00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1006 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3673 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.180 > git --version # 'git version 2.39.2' 00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.231 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.244 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.256 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.256 > git config core.sparsecheckout # timeout=10 00:00:05.268 > git read-tree -mu HEAD # timeout=10 00:00:05.283 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.305 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.305 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.389 [Pipeline] Start of Pipeline 00:00:05.402 [Pipeline] library 00:00:05.404 Loading library shm_lib@master 00:00:05.404 Library shm_lib@master is cached. Copying from home. 00:00:05.417 [Pipeline] node 00:00:05.437 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.438 [Pipeline] { 00:00:05.447 [Pipeline] catchError 00:00:05.448 [Pipeline] { 00:00:05.458 [Pipeline] wrap 00:00:05.464 [Pipeline] { 00:00:05.470 [Pipeline] stage 00:00:05.471 [Pipeline] { (Prologue) 00:00:05.487 [Pipeline] echo 00:00:05.488 Node: VM-host-SM38 00:00:05.493 [Pipeline] cleanWs 00:00:05.504 [WS-CLEANUP] Deleting project workspace... 00:00:05.504 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.510 [WS-CLEANUP] done 00:00:05.732 [Pipeline] setCustomBuildProperty 00:00:05.846 [Pipeline] httpRequest 00:00:06.173 [Pipeline] echo 00:00:06.174 Sorcerer 10.211.164.20 is alive 00:00:06.182 [Pipeline] retry 00:00:06.183 [Pipeline] { 00:00:06.193 [Pipeline] httpRequest 00:00:06.198 HttpMethod: GET 00:00:06.199 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.200 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.201 Response Code: HTTP/1.1 200 OK 00:00:06.201 Success: Status code 200 is in the accepted range: 200,404 00:00:06.202 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.064 [Pipeline] } 00:00:08.082 [Pipeline] // retry 00:00:08.090 [Pipeline] sh 00:00:08.378 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.397 [Pipeline] httpRequest 00:00:08.763 [Pipeline] echo 00:00:08.765 Sorcerer 10.211.164.20 is alive 00:00:08.777 [Pipeline] retry 00:00:08.780 [Pipeline] { 00:00:08.796 [Pipeline] httpRequest 00:00:08.802 HttpMethod: GET 00:00:08.802 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.803 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.825 Response Code: HTTP/1.1 200 OK 00:00:08.826 Success: Status code 200 is in the accepted range: 200,404 00:00:08.826 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:58.722 [Pipeline] } 00:00:58.740 [Pipeline] // retry 00:00:58.749 [Pipeline] sh 00:00:59.035 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:01.595 [Pipeline] sh 00:01:01.878 + git -C spdk log --oneline -n5 00:01:01.878 c13c99a5e test: Various fixes for Fedora40 00:01:01.878 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:01.878 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:01.878 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:01.878 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:01.901 [Pipeline] withCredentials 00:01:01.914 > git --version # timeout=10 00:01:01.929 > git --version # 'git version 2.39.2' 00:01:01.949 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:01.951 [Pipeline] { 00:01:01.960 [Pipeline] retry 00:01:01.962 [Pipeline] { 00:01:01.978 [Pipeline] sh 00:01:02.262 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:02.531 [Pipeline] } 00:01:02.549 [Pipeline] // retry 00:01:02.554 [Pipeline] } 00:01:02.571 [Pipeline] // withCredentials 00:01:02.581 [Pipeline] httpRequest 00:01:03.304 [Pipeline] echo 00:01:03.311 Sorcerer 10.211.164.20 is alive 00:01:03.331 [Pipeline] retry 00:01:03.333 [Pipeline] { 00:01:03.342 [Pipeline] httpRequest 00:01:03.346 HttpMethod: GET 00:01:03.346 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:03.347 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:03.352 Response Code: HTTP/1.1 200 OK 00:01:03.352 Success: Status code 200 is in the accepted range: 200,404 00:01:03.352 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.597 [Pipeline] } 00:01:30.607 [Pipeline] // retry 00:01:30.613 
[Pipeline] sh 00:01:30.894 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:32.870 [Pipeline] sh 00:01:33.157 + git -C dpdk log --oneline -n5 00:01:33.157 eeb0605f11 version: 23.11.0 00:01:33.157 238778122a doc: update release notes for 23.11 00:01:33.157 46aa6b3cfc doc: fix description of RSS features 00:01:33.157 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:33.157 7e421ae345 devtools: support skipping forbid rule check 00:01:33.179 [Pipeline] writeFile 00:01:33.196 [Pipeline] sh 00:01:33.483 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:33.496 [Pipeline] sh 00:01:33.780 + cat autorun-spdk.conf 00:01:33.780 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.780 SPDK_TEST_NVME=1 00:01:33.780 SPDK_TEST_FTL=1 00:01:33.780 SPDK_TEST_ISAL=1 00:01:33.780 SPDK_RUN_ASAN=1 00:01:33.780 SPDK_RUN_UBSAN=1 00:01:33.780 SPDK_TEST_XNVME=1 00:01:33.780 SPDK_TEST_NVME_FDP=1 00:01:33.780 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:33.780 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:33.780 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.789 RUN_NIGHTLY=1 00:01:33.791 [Pipeline] } 00:01:33.807 [Pipeline] // stage 00:01:33.826 [Pipeline] stage 00:01:33.829 [Pipeline] { (Run VM) 00:01:33.842 [Pipeline] sh 00:01:34.128 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:34.128 + echo 'Start stage prepare_nvme.sh' 00:01:34.128 Start stage prepare_nvme.sh 00:01:34.128 + [[ -n 2 ]] 00:01:34.128 + disk_prefix=ex2 00:01:34.128 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:34.128 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:34.128 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:34.128 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.128 ++ SPDK_TEST_NVME=1 00:01:34.128 ++ SPDK_TEST_FTL=1 00:01:34.128 ++ SPDK_TEST_ISAL=1 00:01:34.128 ++ SPDK_RUN_ASAN=1 00:01:34.128 ++ SPDK_RUN_UBSAN=1 00:01:34.128 ++ SPDK_TEST_XNVME=1 00:01:34.128 ++ SPDK_TEST_NVME_FDP=1 00:01:34.128 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:34.128 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:34.128 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.128 ++ RUN_NIGHTLY=1 00:01:34.128 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:34.128 + nvme_files=() 00:01:34.128 + declare -A nvme_files 00:01:34.128 + backend_dir=/var/lib/libvirt/images/backends 00:01:34.128 + nvme_files['nvme.img']=5G 00:01:34.128 + nvme_files['nvme-cmb.img']=5G 00:01:34.128 + nvme_files['nvme-multi0.img']=4G 00:01:34.128 + nvme_files['nvme-multi1.img']=4G 00:01:34.128 + nvme_files['nvme-multi2.img']=4G 00:01:34.128 + nvme_files['nvme-openstack.img']=8G 00:01:34.128 + nvme_files['nvme-zns.img']=5G 00:01:34.128 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:34.128 + (( SPDK_TEST_FTL == 1 )) 00:01:34.128 + nvme_files["nvme-ftl.img"]=6G 00:01:34.128 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:34.128 + nvme_files["nvme-fdp.img"]=1G 00:01:34.128 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:34.128 + for nvme in "${!nvme_files[@]}" 00:01:34.128 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:34.389 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.389 + for nvme in "${!nvme_files[@]}" 00:01:34.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:35.331 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:35.331 + for nvme in "${!nvme_files[@]}" 00:01:35.331 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:35.331 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.331 + for nvme in "${!nvme_files[@]}" 00:01:35.331 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:35.331 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:35.331 + for nvme in "${!nvme_files[@]}" 00:01:35.331 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:35.590 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.590 + for nvme in "${!nvme_files[@]}" 00:01:35.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:35.848 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.848 + for nvme in "${!nvme_files[@]}" 00:01:35.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:36.108 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.108 + for nvme in "${!nvme_files[@]}" 00:01:36.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:36.375 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:36.375 + for nvme in "${!nvme_files[@]}" 00:01:36.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:37.317 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.317 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:37.317 + echo 'End stage prepare_nvme.sh' 00:01:37.317 End stage prepare_nvme.sh 00:01:37.331 [Pipeline] sh 00:01:37.617 + DISTRO=fedora39 00:01:37.617 + CPUS=10 00:01:37.617 + RAM=12288 00:01:37.617 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.617 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:37.617 00:01:37.617 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:37.617 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:37.617 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:37.617 HELP=0 00:01:37.617 DRY_RUN=0 00:01:37.617 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:37.617 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:37.617 NVME_AUTO_CREATE=0 00:01:37.617 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:37.617 NVME_CMB=,,,, 00:01:37.617 NVME_PMR=,,,, 00:01:37.617 NVME_ZNS=,,,, 00:01:37.617 NVME_MS=true,,,, 00:01:37.617 NVME_FDP=,,,on, 00:01:37.617 SPDK_VAGRANT_DISTRO=fedora39 00:01:37.617 SPDK_VAGRANT_VMCPU=10 00:01:37.617 SPDK_VAGRANT_VMRAM=12288 00:01:37.617 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.617 SPDK_VAGRANT_HTTP_PROXY= 00:01:37.617 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.617 SPDK_OPENSTACK_NETWORK=0 00:01:37.617 VAGRANT_PACKAGE_BOX=0 00:01:37.617 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.617 FORCE_DISTRO=true 00:01:37.617 VAGRANT_BOX_VERSION= 00:01:37.617 EXTRA_VAGRANTFILES= 00:01:37.617 NIC_MODEL=e1000 00:01:37.617 00:01:37.617 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:37.617 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:40.158 Bringing machine 'default' up with 'libvirt' provider... 00:01:40.420 ==> default: Creating image (snapshot of base box volume). 00:01:40.680 ==> default: Creating domain with the following settings... 
00:01:40.680 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732751754_4c3646d00c5ae67725dd 00:01:40.680 ==> default: -- Domain type: kvm 00:01:40.680 ==> default: -- Cpus: 10 00:01:40.680 ==> default: -- Feature: acpi 00:01:40.680 ==> default: -- Feature: apic 00:01:40.680 ==> default: -- Feature: pae 00:01:40.680 ==> default: -- Memory: 12288M 00:01:40.680 ==> default: -- Memory Backing: hugepages: 00:01:40.680 ==> default: -- Management MAC: 00:01:40.680 ==> default: -- Loader: 00:01:40.680 ==> default: -- Nvram: 00:01:40.680 ==> default: -- Base box: spdk/fedora39 00:01:40.680 ==> default: -- Storage pool: default 00:01:40.680 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732751754_4c3646d00c5ae67725dd.img (20G) 00:01:40.680 ==> default: -- Volume Cache: default 00:01:40.680 ==> default: -- Kernel: 00:01:40.680 ==> default: -- Initrd: 00:01:40.680 ==> default: -- Graphics Type: vnc 00:01:40.680 ==> default: -- Graphics Port: -1 00:01:40.680 ==> default: -- Graphics IP: 127.0.0.1 00:01:40.680 ==> default: -- Graphics Password: Not defined 00:01:40.680 ==> default: -- Video Type: cirrus 00:01:40.680 ==> default: -- Video VRAM: 9216 00:01:40.680 ==> default: -- Sound Type: 00:01:40.680 ==> default: -- Keymap: en-us 00:01:40.680 ==> default: -- TPM Path: 00:01:40.680 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:40.680 ==> default: -- Command line args: 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:40.680 ==> default: -> value=-drive, 00:01:40.680 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:40.680 ==> default: -> value=-device, 00:01:40.680 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.939 ==> default: Creating shared folders metadata... 00:01:40.939 ==> default: Starting domain. 00:01:43.474 ==> default: Waiting for domain to get an IP address... 00:02:05.436 ==> default: Waiting for SSH to become available... 00:02:05.436 ==> default: Configuring and enabling network interfaces... 00:02:07.986 default: SSH address: 192.168.121.8:22 00:02:07.986 default: SSH username: vagrant 00:02:07.986 default: SSH auth method: private key 00:02:09.908 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:16.518 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.083 ==> default: Mounting SSHFS shared folder... 00:02:23.664 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:23.664 ==> default: Checking Mount.. 00:02:25.036 ==> default: Folder Successfully Mounted! 00:02:25.036 00:02:25.036 SUCCESS! 00:02:25.036 00:02:25.036 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:25.036 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:25.036 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:25.036 00:02:25.046 [Pipeline] } 00:02:25.064 [Pipeline] // stage 00:02:25.074 [Pipeline] dir 00:02:25.074 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:25.077 [Pipeline] { 00:02:25.091 [Pipeline] catchError 00:02:25.093 [Pipeline] { 00:02:25.104 [Pipeline] sh 00:02:25.380 + vagrant ssh-config --host vagrant 00:02:25.380 + sed -ne '/^Host/,$p' 00:02:25.380 + tee ssh_conf 00:02:27.908 Host vagrant 00:02:27.908 HostName 192.168.121.8 00:02:27.908 User vagrant 00:02:27.908 Port 22 00:02:27.908 UserKnownHostsFile /dev/null 00:02:27.908 StrictHostKeyChecking no 00:02:27.908 PasswordAuthentication no 00:02:27.908 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:27.908 IdentitiesOnly yes 00:02:27.908 LogLevel FATAL 00:02:27.908 ForwardAgent yes 00:02:27.908 ForwardX11 yes 00:02:27.908 00:02:27.921 [Pipeline] withEnv 00:02:27.923 [Pipeline] { 00:02:27.940 [Pipeline] sh 00:02:28.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:28.237 source /etc/os-release 00:02:28.237 [[ -e /image.version ]] && img=$(< /image.version) 00:02:28.237 # Minimal, systemd-like check. 
00:02:28.237 if [[ -e /.dockerenv ]]; then 00:02:28.237 # Clear garbage from the node'\''s name: 00:02:28.237 # agt-er_autotest_547-896 -> autotest_547-896 00:02:28.237 # $HOSTNAME is the actual container id 00:02:28.237 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:28.237 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:28.237 # We can assume this is a mount from a host where container is running, 00:02:28.237 # so fetch its hostname to easily identify the target swarm worker. 00:02:28.237 container="$(< /etc/hostname) ($agent)" 00:02:28.237 else 00:02:28.237 # Fallback 00:02:28.237 container=$agent 00:02:28.237 fi 00:02:28.237 fi 00:02:28.237 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:28.237 ' 00:02:28.503 [Pipeline] } 00:02:28.520 [Pipeline] // withEnv 00:02:28.527 [Pipeline] setCustomBuildProperty 00:02:28.540 [Pipeline] stage 00:02:28.542 [Pipeline] { (Tests) 00:02:28.558 [Pipeline] sh 00:02:28.834 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:29.103 [Pipeline] sh 00:02:29.376 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:29.390 [Pipeline] timeout 00:02:29.390 Timeout set to expire in 50 min 00:02:29.392 [Pipeline] { 00:02:29.406 [Pipeline] sh 00:02:29.684 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:30.249 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:30.261 [Pipeline] sh 00:02:30.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:30.873 [Pipeline] sh 00:02:31.148 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:31.423 [Pipeline] sh 00:02:31.702 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:31.702 ++ readlink -f spdk_repo 00:02:31.960 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.960 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.960 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.960 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.960 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.960 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.960 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.960 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:31.960 + cd /home/vagrant/spdk_repo 00:02:31.960 + source /etc/os-release 00:02:31.960 ++ NAME='Fedora Linux' 00:02:31.960 ++ VERSION='39 (Cloud Edition)' 00:02:31.960 ++ ID=fedora 00:02:31.960 ++ VERSION_ID=39 00:02:31.960 ++ VERSION_CODENAME= 00:02:31.960 ++ PLATFORM_ID=platform:f39 00:02:31.960 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:31.960 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.960 ++ LOGO=fedora-logo-icon 00:02:31.960 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:31.960 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.960 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:31.960 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.960 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.960 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.960 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:31.960 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.960 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:31.960 ++ SUPPORT_END=2024-11-12 00:02:31.960 ++ VARIANT='Cloud Edition' 00:02:31.960 ++ VARIANT_ID=cloud 00:02:31.960 + uname -a 00:02:31.960 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:31.960 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:31.960 Hugepages 00:02:31.960 node hugesize free / total 00:02:31.960 node0 1048576kB 0 / 0 00:02:31.960 node0 2048kB 0 / 0 00:02:31.960 00:02:31.960 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.960 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:31.960 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:31.960 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:31.960 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:31.960 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:31.960 + rm -f /tmp/spdk-ld-path 00:02:31.960 + source autorun-spdk.conf 00:02:31.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.960 ++ SPDK_TEST_NVME=1 00:02:31.960 ++ SPDK_TEST_FTL=1 00:02:31.960 ++ SPDK_TEST_ISAL=1 00:02:31.960 ++ SPDK_RUN_ASAN=1 00:02:31.960 ++ SPDK_RUN_UBSAN=1 00:02:32.218 ++ SPDK_TEST_XNVME=1 00:02:32.218 ++ SPDK_TEST_NVME_FDP=1 00:02:32.218 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:32.218 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.218 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.218 ++ RUN_NIGHTLY=1 00:02:32.218 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:32.218 + [[ -n '' ]] 00:02:32.218 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:32.218 + for M in /var/spdk/build-*-manifest.txt 00:02:32.218 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:32.218 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.218 + for M in /var/spdk/build-*-manifest.txt 00:02:32.218 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:32.218 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.218 + for M in /var/spdk/build-*-manifest.txt 00:02:32.218 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:32.218 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.218 ++ uname 00:02:32.218 + [[ Linux == \L\i\n\u\x ]] 00:02:32.218 + sudo dmesg -T 00:02:32.218 + sudo dmesg --clear 00:02:32.218 + dmesg_pid=5736 00:02:32.218 + [[ 
Fedora Linux == FreeBSD ]] 00:02:32.218 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.218 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.218 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:32.218 + [[ -x /usr/src/fio-static/fio ]] 00:02:32.218 + sudo dmesg -Tw 00:02:32.218 + export FIO_BIN=/usr/src/fio-static/fio 00:02:32.218 + FIO_BIN=/usr/src/fio-static/fio 00:02:32.218 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:32.218 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:32.218 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:32.218 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.218 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.218 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:32.218 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.218 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.218 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:32.218 Test configuration: 00:02:32.218 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.218 SPDK_TEST_NVME=1 00:02:32.218 SPDK_TEST_FTL=1 00:02:32.218 SPDK_TEST_ISAL=1 00:02:32.218 SPDK_RUN_ASAN=1 00:02:32.218 SPDK_RUN_UBSAN=1 00:02:32.218 SPDK_TEST_XNVME=1 00:02:32.218 SPDK_TEST_NVME_FDP=1 00:02:32.218 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:32.218 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.218 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.218 RUN_NIGHTLY=1 23:56:46 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:32.218 23:56:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:32.218 23:56:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:32.218 23:56:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.218 23:56:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.218 23:56:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.218 23:56:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.219 23:56:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.219 23:56:46 -- paths/export.sh@5 -- $ export PATH 00:02:32.219 23:56:46 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.219 23:56:46 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:32.219 23:56:46 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:32.219 23:56:46 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732751806.XXXXXX 00:02:32.219 23:56:46 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732751806.f1Wp6A 00:02:32.219 23:56:46 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:32.219 23:56:46 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:32.219 23:56:46 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.219 23:56:46 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:32.219 23:56:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:32.219 23:56:46 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:32.219 23:56:46 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:32.219 23:56:46 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:32.219 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.219 23:56:46 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:32.219 23:56:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.219 23:56:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.219 23:56:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.219 23:56:46 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.219 Wed Nov 27 11:56:46 PM UTC 2024 00:02:32.219 23:56:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.219 LTS-67-gc13c99a5e 00:02:32.219 23:56:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:32.219 23:56:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:32.219 23:56:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:32.219 23:56:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.219 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.219 ************************************ 00:02:32.219 START TEST asan 00:02:32.219 ************************************ 00:02:32.219 using asan 00:02:32.219 23:56:46 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:32.219 00:02:32.219 real 0m0.000s 00:02:32.219 user 0m0.000s 00:02:32.219 sys 0m0.000s 00:02:32.219 23:56:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:32.219 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.219 ************************************ 00:02:32.219 END TEST asan 00:02:32.219 ************************************ 00:02:32.477 23:56:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.477 23:56:46 -- 
spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.477 23:56:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:32.477 23:56:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.477 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.477 ************************************ 00:02:32.477 START TEST ubsan 00:02:32.477 ************************************ 00:02:32.477 using ubsan 00:02:32.477 23:56:46 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:32.477 00:02:32.477 real 0m0.000s 00:02:32.477 user 0m0.000s 00:02:32.477 sys 0m0.000s 00:02:32.477 23:56:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:32.477 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.477 ************************************ 00:02:32.477 END TEST ubsan 00:02:32.477 ************************************ 00:02:32.477 23:56:46 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:32.477 23:56:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:32.477 23:56:46 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:32.477 23:56:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.477 23:56:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.477 ************************************ 00:02:32.477 START TEST build_native_dpdk 00:02:32.477 ************************************ 00:02:32.477 23:56:46 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:32.477 23:56:46 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:32.477 23:56:46 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:32.477 23:56:46 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:32.477 23:56:46 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:32.477 23:56:46 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:32.477 23:56:46 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:32.477 23:56:46 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:32.477 23:56:46 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:32.477 23:56:46 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:32.477 23:56:46 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:32.477 23:56:46 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:32.477 23:56:46 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.477 23:56:46 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:32.477 eeb0605f11 version: 23.11.0 00:02:32.477 238778122a doc: update release notes for 23.11 00:02:32.477 46aa6b3cfc doc: fix description of RSS features 00:02:32.477 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:32.477 7e421ae345 devtools: support skipping forbid rule check 00:02:32.477 23:56:46 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:32.477 23:56:46 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:32.477 23:56:46 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:32.477 23:56:46 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:32.477 23:56:46 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:32.477 23:56:46 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:32.477 23:56:46 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:32.477 23:56:46 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:32.477 23:56:46 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:32.477 23:56:46 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:32.477 23:56:46 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:32.477 23:56:46 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:32.477 23:56:46 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:32.477 23:56:46 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:32.477 23:56:46 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:32.477 23:56:46 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:32.477 23:56:46 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:32.477 23:56:46 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.477 23:56:46 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:32.477 23:56:46 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:32.477 23:56:46 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:32.477 23:56:46 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:32.477 23:56:46 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:32.477 23:56:46 -- scripts/common.sh@343 -- $ case "$op" in 00:02:32.477 23:56:46 -- scripts/common.sh@344 -- $ : 1 00:02:32.477 23:56:46 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:32.477 23:56:46 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.477 23:56:46 -- scripts/common.sh@364 -- $ decimal 23 00:02:32.477 23:56:46 -- scripts/common.sh@352 -- $ local d=23 00:02:32.477 23:56:46 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:32.478 23:56:46 -- scripts/common.sh@354 -- $ echo 23 00:02:32.478 23:56:46 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:32.478 23:56:46 -- scripts/common.sh@365 -- $ decimal 21 00:02:32.478 23:56:46 -- scripts/common.sh@352 -- $ local d=21 00:02:32.478 23:56:46 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:32.478 23:56:46 -- scripts/common.sh@354 -- $ echo 21 00:02:32.478 23:56:46 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:32.478 23:56:46 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:32.478 23:56:46 -- scripts/common.sh@366 -- $ return 1 00:02:32.478 23:56:46 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:32.478 patching file config/rte_config.h 00:02:32.478 Hunk #1 succeeded at 60 (offset 1 line). 00:02:32.478 23:56:46 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:32.478 23:56:46 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:32.478 23:56:46 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:32.478 23:56:46 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:32.478 23:56:46 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:32.478 23:56:46 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:32.478 23:56:46 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.478 23:56:46 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:32.478 23:56:46 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:32.478 23:56:46 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:32.478 23:56:46 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:32.478 23:56:46 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:32.478 23:56:46 -- scripts/common.sh@343 -- $ case "$op" in 00:02:32.478 23:56:46 -- scripts/common.sh@344 -- $ : 1 00:02:32.478 23:56:46 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:32.478 23:56:46 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.478 23:56:46 -- scripts/common.sh@364 -- $ decimal 23 00:02:32.478 23:56:46 -- scripts/common.sh@352 -- $ local d=23 00:02:32.478 23:56:46 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:32.478 23:56:46 -- scripts/common.sh@354 -- $ echo 23 00:02:32.478 23:56:46 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:32.478 23:56:46 -- scripts/common.sh@365 -- $ decimal 24 00:02:32.478 23:56:46 -- scripts/common.sh@352 -- $ local d=24 00:02:32.478 23:56:46 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:32.478 23:56:46 -- scripts/common.sh@354 -- $ echo 24 00:02:32.478 23:56:46 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:32.478 23:56:46 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:32.478 23:56:46 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:32.478 23:56:46 -- scripts/common.sh@367 -- $ return 0 00:02:32.478 23:56:46 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:32.478 patching file lib/pcapng/rte_pcapng.c 00:02:32.478 23:56:46 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:32.478 23:56:46 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:32.478 23:56:46 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:32.478 23:56:46 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:32.478 23:56:46 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:36.667 The Meson build system 00:02:36.667 Version: 1.5.0 00:02:36.667 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:36.667 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:36.667 Build type: native build 00:02:36.667 Program cat found: YES (/usr/bin/cat) 00:02:36.667 Project name: DPDK 00:02:36.667 Project version: 23.11.0 00:02:36.667 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.667 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:36.667 Host machine cpu family: x86_64 00:02:36.667 Host machine cpu: x86_64 00:02:36.667 Message: ## Building in Developer Mode ## 00:02:36.667 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.667 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:36.667 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.667 Program python3 found: YES (/usr/bin/python3) 00:02:36.667 Program cat found: YES (/usr/bin/cat) 00:02:36.667 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
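The cmp_versions/lt xtrace a few lines above is SPDK's autobuild deciding which DPDK compatibility patches to apply: it splits the two version strings on ".-:" and compares them component by component, returning 0 only when the first version is strictly older than the second. A shorter, roughly equivalent check (a simplified sketch using GNU sort -V, not the actual helper from scripts/common.sh) gives the same answers for the two comparisons recorded in this log:

  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 23.11.0 21.11.0; echo $?   # 1 -> 23.11.0 is not older than 21.11.0; the config/rte_config.h patch above ran on this branch
  lt 23.11.0 24.07.0; echo $?   # 0 -> 23.11.0 is older than 24.07.0, so the lib/pcapng/rte_pcapng.c patch above was applied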
00:02:36.667 Compiler for C supports arguments -march=native: YES 00:02:36.667 Checking for size of "void *" : 8 00:02:36.667 Checking for size of "void *" : 8 (cached) 00:02:36.667 Library m found: YES 00:02:36.667 Library numa found: YES 00:02:36.667 Has header "numaif.h" : YES 00:02:36.667 Library fdt found: NO 00:02:36.667 Library execinfo found: NO 00:02:36.667 Has header "execinfo.h" : YES 00:02:36.667 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.667 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.667 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.667 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.667 Run-time dependency openssl found: YES 3.1.1 00:02:36.667 Run-time dependency libpcap found: YES 1.10.4 00:02:36.667 Has header "pcap.h" with dependency libpcap: YES 00:02:36.667 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.667 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.667 Compiler for C supports arguments -Wformat: YES 00:02:36.667 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:36.667 Compiler for C supports arguments -Wformat-security: NO 00:02:36.667 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.667 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.667 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.667 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.667 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.667 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.667 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.667 Compiler for C supports arguments -Wundef: YES 00:02:36.667 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.667 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.667 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:36.667 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.667 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:36.667 Program objdump found: YES (/usr/bin/objdump) 00:02:36.667 Compiler for C supports arguments -mavx512f: YES 00:02:36.667 Checking if "AVX512 checking" compiles: YES 00:02:36.667 Fetching value of define "__SSE4_2__" : 1 00:02:36.667 Fetching value of define "__AES__" : 1 00:02:36.667 Fetching value of define "__AVX__" : 1 00:02:36.667 Fetching value of define "__AVX2__" : 1 00:02:36.667 Fetching value of define "__AVX512BW__" : 1 00:02:36.667 Fetching value of define "__AVX512CD__" : 1 00:02:36.667 Fetching value of define "__AVX512DQ__" : 1 00:02:36.667 Fetching value of define "__AVX512F__" : 1 00:02:36.667 Fetching value of define "__AVX512VL__" : 1 00:02:36.667 Fetching value of define "__PCLMUL__" : 1 00:02:36.667 Fetching value of define "__RDRND__" : 1 00:02:36.667 Fetching value of define "__RDSEED__" : 1 00:02:36.667 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:36.667 Fetching value of define "__znver1__" : (undefined) 00:02:36.667 Fetching value of define "__znver2__" : (undefined) 00:02:36.667 Fetching value of define "__znver3__" : (undefined) 00:02:36.667 Fetching value of define "__znver4__" : (undefined) 00:02:36.667 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:36.667 Message: lib/log: Defining dependency "log" 00:02:36.667 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.667 Message: lib/telemetry: Defining dependency "telemetry" 
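The "Fetching value of define ..." probes just above are Meson asking the host compiler which SIMD/crypto feature macros it predefines; the cached values are reused throughout the rest of the configure step. The same macros can be inspected directly by dumping gcc's predefined defines (an illustrative one-liner assuming the same gcc 13 toolchain as this run; the grep pattern only covers the defines mentioned in this log):

  gcc -march=native -dM -E -x c - </dev/null \
    | grep -E '__(AVX512(F|BW|CD|DQ|VL)|AES|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'

Each hit prints a line such as "#define __AVX512F__ 1", matching the "1" values Meson records above.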
00:02:36.667 Checking for function "getentropy" : NO 00:02:36.667 Message: lib/eal: Defining dependency "eal" 00:02:36.667 Message: lib/ring: Defining dependency "ring" 00:02:36.667 Message: lib/rcu: Defining dependency "rcu" 00:02:36.667 Message: lib/mempool: Defining dependency "mempool" 00:02:36.667 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.667 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.667 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:36.667 Compiler for C supports arguments -mpclmul: YES 00:02:36.667 Compiler for C supports arguments -maes: YES 00:02:36.667 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.667 Compiler for C supports arguments -mavx512bw: YES 00:02:36.667 Compiler for C supports arguments -mavx512dq: YES 00:02:36.667 Compiler for C supports arguments -mavx512vl: YES 00:02:36.667 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.667 Compiler for C supports arguments -mavx2: YES 00:02:36.667 Compiler for C supports arguments -mavx: YES 00:02:36.667 Message: lib/net: Defining dependency "net" 00:02:36.667 Message: lib/meter: Defining dependency "meter" 00:02:36.667 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.667 Message: lib/pci: Defining dependency "pci" 00:02:36.667 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.667 Message: lib/metrics: Defining dependency "metrics" 00:02:36.667 Message: lib/hash: Defining dependency "hash" 00:02:36.667 Message: lib/timer: Defining dependency "timer" 00:02:36.667 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.667 Message: lib/acl: Defining dependency "acl" 00:02:36.667 Message: lib/bbdev: Defining dependency "bbdev" 00:02:36.667 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:36.667 Run-time dependency libelf found: YES 0.191 00:02:36.667 Message: lib/bpf: Defining dependency "bpf" 00:02:36.667 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:36.667 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.667 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.667 Message: lib/distributor: Defining dependency "distributor" 00:02:36.667 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.667 Message: lib/efd: Defining dependency "efd" 00:02:36.667 Message: lib/eventdev: Defining dependency "eventdev" 00:02:36.667 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:36.667 Message: lib/gpudev: Defining dependency "gpudev" 00:02:36.667 Message: lib/gro: Defining dependency "gro" 00:02:36.667 Message: lib/gso: Defining dependency "gso" 00:02:36.667 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:36.667 Message: lib/jobstats: Defining dependency "jobstats" 00:02:36.667 Message: lib/latencystats: Defining dependency "latencystats" 00:02:36.667 Message: lib/lpm: Defining dependency "lpm" 00:02:36.667 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512IFMA__" : 1 00:02:36.667 Message: 
lib/member: Defining dependency "member" 00:02:36.667 Message: lib/pcapng: Defining dependency "pcapng" 00:02:36.667 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.667 Message: lib/power: Defining dependency "power" 00:02:36.667 Message: lib/rawdev: Defining dependency "rawdev" 00:02:36.667 Message: lib/regexdev: Defining dependency "regexdev" 00:02:36.667 Message: lib/mldev: Defining dependency "mldev" 00:02:36.667 Message: lib/rib: Defining dependency "rib" 00:02:36.667 Message: lib/reorder: Defining dependency "reorder" 00:02:36.667 Message: lib/sched: Defining dependency "sched" 00:02:36.667 Message: lib/security: Defining dependency "security" 00:02:36.667 Message: lib/stack: Defining dependency "stack" 00:02:36.667 Has header "linux/userfaultfd.h" : YES 00:02:36.667 Has header "linux/vduse.h" : YES 00:02:36.667 Message: lib/vhost: Defining dependency "vhost" 00:02:36.667 Message: lib/ipsec: Defining dependency "ipsec" 00:02:36.667 Message: lib/pdcp: Defining dependency "pdcp" 00:02:36.667 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.667 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.667 Message: lib/fib: Defining dependency "fib" 00:02:36.667 Message: lib/port: Defining dependency "port" 00:02:36.667 Message: lib/pdump: Defining dependency "pdump" 00:02:36.667 Message: lib/table: Defining dependency "table" 00:02:36.667 Message: lib/pipeline: Defining dependency "pipeline" 00:02:36.667 Message: lib/graph: Defining dependency "graph" 00:02:36.667 Message: lib/node: Defining dependency "node" 00:02:36.667 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:36.667 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.668 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.668 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.571 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:38.571 Compiler for C supports arguments -Wno-unused-value: YES 00:02:38.571 Compiler for C supports arguments -Wno-format: YES 00:02:38.571 Compiler for C supports arguments -Wno-format-security: YES 00:02:38.571 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:38.571 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:38.571 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:38.571 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:38.571 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:38.571 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:38.571 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.571 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:38.571 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:38.571 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:38.571 Has header "sys/epoll.h" : YES 00:02:38.571 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:38.571 Configuring doxy-api-html.conf using configuration 00:02:38.571 Configuring doxy-api-man.conf using configuration 00:02:38.571 Program mandb found: YES (/usr/bin/mandb) 00:02:38.571 Program sphinx-build found: NO 00:02:38.571 Configuring rte_build_config.h using configuration 00:02:38.571 Message: 00:02:38.571 ================= 00:02:38.571 Applications Enabled 00:02:38.571 ================= 00:02:38.571 00:02:38.571 apps: 00:02:38.571 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:02:38.571 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:38.571 test-pmd, test-regex, test-sad, test-security-perf, 00:02:38.571 00:02:38.571 Message: 00:02:38.571 ================= 00:02:38.571 Libraries Enabled 00:02:38.571 ================= 00:02:38.571 00:02:38.571 libs: 00:02:38.571 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.571 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:38.571 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:38.571 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:38.571 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:38.571 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:38.571 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:38.571 00:02:38.571 00:02:38.571 Message: 00:02:38.571 =============== 00:02:38.571 Drivers Enabled 00:02:38.571 =============== 00:02:38.571 00:02:38.571 common: 00:02:38.571 00:02:38.571 bus: 00:02:38.571 pci, vdev, 00:02:38.571 mempool: 00:02:38.571 ring, 00:02:38.571 dma: 00:02:38.571 00:02:38.571 net: 00:02:38.572 i40e, 00:02:38.572 raw: 00:02:38.572 00:02:38.572 crypto: 00:02:38.572 00:02:38.572 compress: 00:02:38.572 00:02:38.572 regex: 00:02:38.572 00:02:38.572 ml: 00:02:38.572 00:02:38.572 vdpa: 00:02:38.572 00:02:38.572 event: 00:02:38.572 00:02:38.572 baseband: 00:02:38.572 00:02:38.572 gpu: 00:02:38.572 00:02:38.572 00:02:38.572 Message: 00:02:38.572 ================= 00:02:38.572 Content Skipped 00:02:38.572 ================= 00:02:38.572 00:02:38.572 apps: 00:02:38.572 00:02:38.572 libs: 00:02:38.572 00:02:38.572 drivers: 00:02:38.572 common/cpt: not in enabled drivers build config 00:02:38.572 common/dpaax: not in enabled drivers build config 00:02:38.572 common/iavf: not in enabled drivers build config 00:02:38.572 common/idpf: not in enabled drivers build config 00:02:38.572 common/mvep: not in enabled drivers build config 00:02:38.572 common/octeontx: not in enabled drivers build config 00:02:38.572 bus/auxiliary: not in enabled drivers build config 00:02:38.572 bus/cdx: not in enabled drivers build config 00:02:38.572 bus/dpaa: not in enabled drivers build config 00:02:38.572 bus/fslmc: not in enabled drivers build config 00:02:38.572 bus/ifpga: not in enabled drivers build config 00:02:38.572 bus/platform: not in enabled drivers build config 00:02:38.572 bus/vmbus: not in enabled drivers build config 00:02:38.572 common/cnxk: not in enabled drivers build config 00:02:38.572 common/mlx5: not in enabled drivers build config 00:02:38.572 common/nfp: not in enabled drivers build config 00:02:38.572 common/qat: not in enabled drivers build config 00:02:38.572 common/sfc_efx: not in enabled drivers build config 00:02:38.572 mempool/bucket: not in enabled drivers build config 00:02:38.572 mempool/cnxk: not in enabled drivers build config 00:02:38.572 mempool/dpaa: not in enabled drivers build config 00:02:38.572 mempool/dpaa2: not in enabled drivers build config 00:02:38.572 mempool/octeontx: not in enabled drivers build config 00:02:38.572 mempool/stack: not in enabled drivers build config 00:02:38.572 dma/cnxk: not in enabled drivers build config 00:02:38.572 dma/dpaa: not in enabled drivers build config 00:02:38.572 dma/dpaa2: not in enabled drivers build config 00:02:38.572 dma/hisilicon: not in enabled drivers build config 00:02:38.572 dma/idxd: not in enabled drivers build 
config 00:02:38.572 dma/ioat: not in enabled drivers build config 00:02:38.572 dma/skeleton: not in enabled drivers build config 00:02:38.572 net/af_packet: not in enabled drivers build config 00:02:38.572 net/af_xdp: not in enabled drivers build config 00:02:38.572 net/ark: not in enabled drivers build config 00:02:38.572 net/atlantic: not in enabled drivers build config 00:02:38.572 net/avp: not in enabled drivers build config 00:02:38.572 net/axgbe: not in enabled drivers build config 00:02:38.572 net/bnx2x: not in enabled drivers build config 00:02:38.572 net/bnxt: not in enabled drivers build config 00:02:38.572 net/bonding: not in enabled drivers build config 00:02:38.572 net/cnxk: not in enabled drivers build config 00:02:38.572 net/cpfl: not in enabled drivers build config 00:02:38.572 net/cxgbe: not in enabled drivers build config 00:02:38.572 net/dpaa: not in enabled drivers build config 00:02:38.572 net/dpaa2: not in enabled drivers build config 00:02:38.572 net/e1000: not in enabled drivers build config 00:02:38.572 net/ena: not in enabled drivers build config 00:02:38.572 net/enetc: not in enabled drivers build config 00:02:38.572 net/enetfec: not in enabled drivers build config 00:02:38.572 net/enic: not in enabled drivers build config 00:02:38.572 net/failsafe: not in enabled drivers build config 00:02:38.572 net/fm10k: not in enabled drivers build config 00:02:38.572 net/gve: not in enabled drivers build config 00:02:38.572 net/hinic: not in enabled drivers build config 00:02:38.572 net/hns3: not in enabled drivers build config 00:02:38.572 net/iavf: not in enabled drivers build config 00:02:38.572 net/ice: not in enabled drivers build config 00:02:38.572 net/idpf: not in enabled drivers build config 00:02:38.572 net/igc: not in enabled drivers build config 00:02:38.572 net/ionic: not in enabled drivers build config 00:02:38.572 net/ipn3ke: not in enabled drivers build config 00:02:38.572 net/ixgbe: not in enabled drivers build config 00:02:38.572 net/mana: not in enabled drivers build config 00:02:38.572 net/memif: not in enabled drivers build config 00:02:38.572 net/mlx4: not in enabled drivers build config 00:02:38.572 net/mlx5: not in enabled drivers build config 00:02:38.572 net/mvneta: not in enabled drivers build config 00:02:38.572 net/mvpp2: not in enabled drivers build config 00:02:38.572 net/netvsc: not in enabled drivers build config 00:02:38.572 net/nfb: not in enabled drivers build config 00:02:38.572 net/nfp: not in enabled drivers build config 00:02:38.572 net/ngbe: not in enabled drivers build config 00:02:38.572 net/null: not in enabled drivers build config 00:02:38.572 net/octeontx: not in enabled drivers build config 00:02:38.572 net/octeon_ep: not in enabled drivers build config 00:02:38.572 net/pcap: not in enabled drivers build config 00:02:38.572 net/pfe: not in enabled drivers build config 00:02:38.572 net/qede: not in enabled drivers build config 00:02:38.572 net/ring: not in enabled drivers build config 00:02:38.572 net/sfc: not in enabled drivers build config 00:02:38.572 net/softnic: not in enabled drivers build config 00:02:38.572 net/tap: not in enabled drivers build config 00:02:38.572 net/thunderx: not in enabled drivers build config 00:02:38.572 net/txgbe: not in enabled drivers build config 00:02:38.572 net/vdev_netvsc: not in enabled drivers build config 00:02:38.572 net/vhost: not in enabled drivers build config 00:02:38.572 net/virtio: not in enabled drivers build config 00:02:38.572 net/vmxnet3: not in enabled drivers build config 
00:02:38.572 raw/cnxk_bphy: not in enabled drivers build config 00:02:38.572 raw/cnxk_gpio: not in enabled drivers build config 00:02:38.572 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:38.572 raw/ifpga: not in enabled drivers build config 00:02:38.572 raw/ntb: not in enabled drivers build config 00:02:38.572 raw/skeleton: not in enabled drivers build config 00:02:38.572 crypto/armv8: not in enabled drivers build config 00:02:38.572 crypto/bcmfs: not in enabled drivers build config 00:02:38.572 crypto/caam_jr: not in enabled drivers build config 00:02:38.572 crypto/ccp: not in enabled drivers build config 00:02:38.572 crypto/cnxk: not in enabled drivers build config 00:02:38.572 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.572 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.572 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.572 crypto/mlx5: not in enabled drivers build config 00:02:38.572 crypto/mvsam: not in enabled drivers build config 00:02:38.572 crypto/nitrox: not in enabled drivers build config 00:02:38.572 crypto/null: not in enabled drivers build config 00:02:38.572 crypto/octeontx: not in enabled drivers build config 00:02:38.572 crypto/openssl: not in enabled drivers build config 00:02:38.572 crypto/scheduler: not in enabled drivers build config 00:02:38.572 crypto/uadk: not in enabled drivers build config 00:02:38.572 crypto/virtio: not in enabled drivers build config 00:02:38.572 compress/isal: not in enabled drivers build config 00:02:38.572 compress/mlx5: not in enabled drivers build config 00:02:38.572 compress/octeontx: not in enabled drivers build config 00:02:38.572 compress/zlib: not in enabled drivers build config 00:02:38.572 regex/mlx5: not in enabled drivers build config 00:02:38.572 regex/cn9k: not in enabled drivers build config 00:02:38.572 ml/cnxk: not in enabled drivers build config 00:02:38.572 vdpa/ifc: not in enabled drivers build config 00:02:38.572 vdpa/mlx5: not in enabled drivers build config 00:02:38.572 vdpa/nfp: not in enabled drivers build config 00:02:38.572 vdpa/sfc: not in enabled drivers build config 00:02:38.572 event/cnxk: not in enabled drivers build config 00:02:38.572 event/dlb2: not in enabled drivers build config 00:02:38.572 event/dpaa: not in enabled drivers build config 00:02:38.572 event/dpaa2: not in enabled drivers build config 00:02:38.572 event/dsw: not in enabled drivers build config 00:02:38.572 event/opdl: not in enabled drivers build config 00:02:38.572 event/skeleton: not in enabled drivers build config 00:02:38.572 event/sw: not in enabled drivers build config 00:02:38.572 event/octeontx: not in enabled drivers build config 00:02:38.572 baseband/acc: not in enabled drivers build config 00:02:38.572 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:38.572 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:38.572 baseband/la12xx: not in enabled drivers build config 00:02:38.572 baseband/null: not in enabled drivers build config 00:02:38.572 baseband/turbo_sw: not in enabled drivers build config 00:02:38.572 gpu/cuda: not in enabled drivers build config 00:02:38.572 00:02:38.572 00:02:38.572 Build targets in project: 215 00:02:38.572 00:02:38.572 DPDK 23.11.0 00:02:38.572 00:02:38.572 User defined options 00:02:38.572 libdir : lib 00:02:38.572 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:38.572 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:38.572 c_link_args : 00:02:38.572 enable_docs : false 00:02:38.572 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:38.572 enable_kmods : false 00:02:38.572 machine : native 00:02:38.572 tests : false 00:02:38.572 00:02:38.572 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.572 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:38.572 23:56:52 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:38.572 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:38.572 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.572 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.572 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.572 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.572 [5/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.572 [6/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.572 [7/705] Linking static target lib/librte_kvargs.a 00:02:38.572 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.573 [9/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.573 [10/705] Linking static target lib/librte_log.a 00:02:38.573 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.573 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.831 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.831 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.831 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.831 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.831 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.831 [18/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.831 [19/705] Linking target lib/librte_log.so.24.0 00:02:39.090 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.090 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.090 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.090 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.090 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.090 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.090 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:39.348 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.348 [28/705] Linking target lib/librte_kvargs.so.24.0 00:02:39.348 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.348 [30/705] Linking static target lib/librte_telemetry.a 00:02:39.348 [31/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.348 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.348 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.348 [34/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.348 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.348 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.348 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.607 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.607 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.607 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.607 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.607 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.607 [43/705] Linking target lib/librte_telemetry.so.24.0 00:02:39.607 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.607 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:39.607 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.865 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.865 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.865 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.865 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.865 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.865 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.137 [53/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.137 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.137 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.137 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.137 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.137 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.137 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.137 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.137 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.137 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.137 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.423 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.423 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.423 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.423 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.423 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.423 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.423 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.423 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.423 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.423 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.423 
[74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.423 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.688 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.688 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.688 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.688 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.688 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.951 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.951 [82/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.951 [83/705] Linking static target lib/librte_ring.a 00:02:40.951 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.951 [85/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.951 [86/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.951 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.951 [88/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.951 [89/705] Linking static target lib/librte_eal.a 00:02:41.210 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.210 [91/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.210 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.210 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.469 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.469 [95/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.469 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.469 [97/705] Linking static target lib/librte_mempool.a 00:02:41.469 [98/705] Linking static target lib/librte_rcu.a 00:02:41.469 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.469 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.469 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.469 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.469 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.469 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.728 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.728 [106/705] Linking static target lib/librte_meter.a 00:02:41.728 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:41.728 [108/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.728 [109/705] Linking static target lib/librte_net.a 00:02:41.728 [110/705] Linking static target lib/librte_mbuf.a 00:02:41.728 [111/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.728 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.728 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.728 [114/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.986 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.986 [116/705] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.986 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.986 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.244 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.244 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.502 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.502 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.502 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.502 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.502 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.502 [126/705] Linking static target lib/librte_pci.a 00:02:42.502 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.502 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.502 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.761 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.761 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.761 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.761 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.761 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.761 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.761 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.761 [137/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.761 [138/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.761 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.761 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.761 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.761 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.019 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.019 [144/705] Linking static target lib/librte_cmdline.a 00:02:43.019 [145/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.276 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:43.276 [147/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.276 [148/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:43.276 [149/705] Linking static target lib/librte_metrics.a 00:02:43.534 [150/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.534 [151/705] Linking static target lib/librte_timer.a 00:02:43.534 [152/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.534 [153/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.534 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.534 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 
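
[Note] The configuration summary earlier in this log (User defined options: prefix, libdir, c_args, enable_drivers, enable_docs, enable_kmods, machine, tests) together with the ninja step launched by common/autobuild_common.sh amounts to a standard meson-based DPDK 23.11 build. Below is a minimal sketch of roughly the equivalent manual invocation, assuming the same checkout under /home/vagrant/spdk_repo/dpdk; the flags are copied from the summary, not from the CI wrapper itself, and the sketch uses the recommended "meson setup" form (the WARNING above is only about calling plain "meson [options]" instead).

  # Configure DPDK with only the drivers this job enables
  # (bus/pci, bus/vdev, mempool/ring, net/i40e), no docs, no unit tests.
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  # Build with 10 parallel jobs, matching the ninja line recorded above.
  ninja -C build-tmp -j10
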
00:02:43.793 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.793 [157/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.793 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.793 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.051 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.310 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.310 [162/705] Linking static target lib/librte_bitratestats.a 00:02:44.310 [163/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:44.310 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.310 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:44.310 [166/705] Linking static target lib/librte_bbdev.a 00:02:44.568 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.568 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.568 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:44.826 [170/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.826 [171/705] Linking static target lib/librte_hash.a 00:02:44.826 [172/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.826 [173/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:44.826 [174/705] Linking static target lib/acl/libavx2_tmp.a 00:02:44.826 [175/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.084 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.084 [177/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.084 [178/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.084 [179/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.084 [180/705] Linking target lib/librte_eal.so.24.0 00:02:45.084 [181/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.084 [182/705] Linking static target lib/librte_ethdev.a 00:02:45.084 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.084 [184/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:45.085 [185/705] Linking target lib/librte_ring.so.24.0 00:02:45.343 [186/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.343 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:45.343 [188/705] Linking target lib/librte_meter.so.24.0 00:02:45.343 [189/705] Linking target lib/librte_pci.so.24.0 00:02:45.343 [190/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:45.343 [191/705] Linking target lib/librte_rcu.so.24.0 00:02:45.343 [192/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.343 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.343 [194/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:45.343 [195/705] Linking target lib/librte_timer.so.24.0 00:02:45.343 [196/705] Linking target lib/librte_mempool.so.24.0 00:02:45.343 [197/705] Linking static target lib/librte_cfgfile.a 00:02:45.343 [198/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:45.343 [199/705] 
Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:45.343 [200/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.343 [201/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:45.343 [202/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:45.602 [203/705] Linking target lib/librte_mbuf.so.24.0 00:02:45.602 [204/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:45.602 [205/705] Linking static target lib/librte_bpf.a 00:02:45.602 [206/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.602 [207/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:45.602 [208/705] Linking static target lib/librte_compressdev.a 00:02:45.602 [209/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.602 [210/705] Linking target lib/librte_bbdev.so.24.0 00:02:45.602 [211/705] Linking target lib/librte_net.so.24.0 00:02:45.602 [212/705] Linking target lib/librte_cfgfile.so.24.0 00:02:45.602 [213/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:45.602 [214/705] Linking static target lib/librte_acl.a 00:02:45.602 [215/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.602 [216/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:45.602 [217/705] Linking target lib/librte_cmdline.so.24.0 00:02:45.861 [218/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.861 [219/705] Linking target lib/librte_hash.so.24.0 00:02:45.861 [220/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:45.861 [221/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.861 [222/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:45.861 [223/705] Linking target lib/librte_acl.so.24.0 00:02:45.861 [224/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:45.861 [225/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.861 [226/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:45.861 [227/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:45.861 [228/705] Linking target lib/librte_compressdev.so.24.0 00:02:46.119 [229/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.119 [230/705] Linking static target lib/librte_distributor.a 00:02:46.119 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.119 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.119 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:46.119 [234/705] Linking target lib/librte_distributor.so.24.0 00:02:46.377 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.377 [236/705] Linking static target lib/librte_dmadev.a 00:02:46.377 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:46.635 [238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:46.635 [239/705] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.635 [240/705] Linking target lib/librte_dmadev.so.24.0 00:02:46.635 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:46.635 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:46.635 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:46.892 [244/705] Linking static target lib/librte_efd.a 00:02:46.892 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:46.892 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.892 [247/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.892 [248/705] Linking target lib/librte_efd.so.24.0 00:02:46.892 [249/705] Linking static target lib/librte_cryptodev.a 00:02:47.149 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:47.149 [251/705] Linking static target lib/librte_dispatcher.a 00:02:47.149 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:47.149 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:47.149 [254/705] Linking static target lib/librte_gpudev.a 00:02:47.407 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:47.408 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:47.408 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.408 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:47.408 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:47.716 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:47.716 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:47.716 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:47.716 [263/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.716 [264/705] Linking static target lib/librte_gro.a 00:02:47.716 [265/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:47.716 [266/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.716 [267/705] Linking target lib/librte_gpudev.so.24.0 00:02:47.974 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:47.974 [269/705] Linking target lib/librte_cryptodev.so.24.0 00:02:47.974 [270/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.974 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:47.974 [272/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:47.974 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:47.974 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:47.974 [275/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:47.974 [276/705] Linking static target lib/librte_eventdev.a 00:02:47.974 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:48.231 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:48.231 [279/705] Linking static target lib/librte_gso.a 00:02:48.231 [280/705] Generating lib/gso.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:48.231 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:48.231 [282/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:48.498 [283/705] Linking static target lib/librte_jobstats.a 00:02:48.498 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:48.498 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:48.498 [286/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:48.498 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:48.498 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:48.498 [289/705] Linking static target lib/librte_ip_frag.a 00:02:48.498 [290/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.498 [291/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.498 [292/705] Linking target lib/librte_ethdev.so.24.0 00:02:48.498 [293/705] Linking target lib/librte_jobstats.so.24.0 00:02:48.498 [294/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:48.772 [295/705] Linking static target lib/librte_latencystats.a 00:02:48.772 [296/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:48.772 [297/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.772 [298/705] Linking target lib/librte_metrics.so.24.0 00:02:48.772 [299/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:48.772 [300/705] Linking target lib/librte_bpf.so.24.0 00:02:48.772 [301/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:48.772 [302/705] Linking target lib/librte_gro.so.24.0 00:02:48.772 [303/705] Linking target lib/librte_gso.so.24.0 00:02:48.772 [304/705] Linking target lib/librte_ip_frag.so.24.0 00:02:48.772 [305/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:48.772 [306/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.772 [307/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:48.772 [308/705] Linking target lib/librte_bitratestats.so.24.0 00:02:48.772 [309/705] Linking target lib/librte_latencystats.so.24.0 00:02:48.772 [310/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:48.772 [311/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.030 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.030 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:49.030 [314/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:49.030 [315/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.288 [316/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.288 [317/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:49.288 [318/705] Linking static target lib/librte_lpm.a 00:02:49.288 [319/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.288 [320/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:49.288 [321/705] Compiling C object 
lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:49.288 [322/705] Linking static target lib/librte_pcapng.a 00:02:49.288 [323/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.546 [324/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.546 [325/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.546 [326/705] Linking target lib/librte_lpm.so.24.0 00:02:49.546 [327/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.546 [328/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.546 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.546 [330/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.546 [331/705] Linking target lib/librte_eventdev.so.24.0 00:02:49.546 [332/705] Linking target lib/librte_pcapng.so.24.0 00:02:49.546 [333/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:49.546 [334/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:49.546 [335/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:49.546 [336/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.546 [337/705] Linking target lib/librte_dispatcher.so.24.0 00:02:49.804 [338/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:49.804 [339/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.804 [340/705] Linking static target lib/librte_power.a 00:02:49.804 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:49.804 [342/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:49.804 [343/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:49.804 [344/705] Linking static target lib/librte_rawdev.a 00:02:49.804 [345/705] Linking static target lib/librte_regexdev.a 00:02:49.804 [346/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:49.804 [347/705] Linking static target lib/librte_member.a 00:02:49.804 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:49.804 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:50.062 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:50.062 [351/705] Linking static target lib/librte_mldev.a 00:02:50.062 [352/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.062 [353/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:50.062 [354/705] Linking target lib/librte_member.so.24.0 00:02:50.062 [355/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.062 [356/705] Linking target lib/librte_rawdev.so.24.0 00:02:50.062 [357/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:50.062 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:50.062 [359/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.321 [360/705] Linking target lib/librte_power.so.24.0 00:02:50.321 [361/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.321 [362/705] Linking static target lib/librte_reorder.a 00:02:50.321 [363/705] 
Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.321 [364/705] Linking target lib/librte_regexdev.so.24.0 00:02:50.321 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:50.321 [366/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:50.321 [367/705] Linking static target lib/librte_rib.a 00:02:50.321 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:50.321 [369/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.321 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:50.580 [371/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.580 [372/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:50.580 [373/705] Linking static target lib/librte_stack.a 00:02:50.580 [374/705] Linking target lib/librte_reorder.so.24.0 00:02:50.580 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.580 [376/705] Linking static target lib/librte_security.a 00:02:50.580 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:50.580 [378/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.580 [379/705] Linking target lib/librte_rib.so.24.0 00:02:50.580 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.580 [381/705] Linking target lib/librte_stack.so.24.0 00:02:50.580 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:50.837 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.837 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.837 [385/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.837 [386/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.837 [387/705] Linking target lib/librte_mldev.so.24.0 00:02:50.837 [388/705] Linking target lib/librte_security.so.24.0 00:02:50.837 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.096 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:51.096 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:51.096 [392/705] Linking static target lib/librte_sched.a 00:02:51.096 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.096 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.354 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.354 [396/705] Linking target lib/librte_sched.so.24.0 00:02:51.354 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.354 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:51.354 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:51.613 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:51.613 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.613 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:51.613 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:51.613 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:51.871 [405/705] Compiling 
C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:51.871 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:51.871 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:51.871 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:52.130 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:52.130 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:52.130 [411/705] Linking static target lib/librte_ipsec.a 00:02:52.130 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:52.130 [413/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:52.389 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.389 [415/705] Linking target lib/librte_ipsec.so.24.0 00:02:52.389 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:52.389 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:52.389 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:52.389 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:52.648 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:52.648 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:52.648 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:52.907 [423/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:52.907 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:52.907 [425/705] Linking static target lib/librte_fib.a 00:02:52.907 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:52.907 [427/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:52.907 [428/705] Linking static target lib/librte_pdcp.a 00:02:53.165 [429/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.165 [430/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.165 [431/705] Linking target lib/librte_fib.so.24.0 00:02:53.165 [432/705] Linking target lib/librte_pdcp.so.24.0 00:02:53.165 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:53.165 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:53.165 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:53.424 [436/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:53.424 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:53.424 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:53.682 [439/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:53.682 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:53.682 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:53.682 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:53.682 [443/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:53.941 [444/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:53.941 [445/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:53.941 [446/705] Linking static target lib/librte_port.a 00:02:53.941 [447/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:53.941 [448/705] Linking static target lib/librte_pdump.a 00:02:53.941 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:53.941 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:54.200 [451/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.200 [452/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:54.200 [453/705] Linking target lib/librte_pdump.so.24.0 00:02:54.200 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.200 [455/705] Linking target lib/librte_port.so.24.0 00:02:54.459 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:54.459 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:54.459 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:54.459 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:54.459 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:54.459 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:54.459 [462/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.718 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:54.718 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:54.718 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:54.718 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:54.718 [467/705] Linking static target lib/librte_table.a 00:02:54.977 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:54.977 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:55.235 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:55.235 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:55.235 [472/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.235 [473/705] Linking target lib/librte_table.so.24.0 00:02:55.494 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:55.494 [475/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:55.494 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:55.494 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:55.494 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:55.753 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:55.753 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:55.753 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:55.753 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.012 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:56.012 [484/705] Linking static target lib/librte_graph.a 00:02:56.012 [485/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:56.012 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:56.012 [487/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:56.012 
[488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:56.271 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:56.271 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.271 [491/705] Linking target lib/librte_graph.so.24.0 00:02:56.271 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:56.582 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:56.582 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:56.582 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:56.582 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:56.582 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:56.582 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:56.841 [499/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:56.841 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.841 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:56.841 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:57.102 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:57.102 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.102 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.102 [506/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:57.102 [507/705] Linking static target lib/librte_node.a 00:02:57.102 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.102 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.102 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.102 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.362 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.362 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.362 [514/705] Linking target lib/librte_node.so.24.0 00:02:57.362 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.362 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.362 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.362 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.362 [519/705] Linking static target drivers/librte_bus_pci.a 00:02:57.620 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.620 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.620 [522/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:57.620 [523/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.620 [524/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.620 [525/705] Linking static target drivers/librte_bus_vdev.a 00:02:57.620 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:57.620 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.620 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.620 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:57.880 [530/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.880 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.880 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.880 [533/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:57.880 [534/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:57.880 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:57.880 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:57.880 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.880 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.880 [539/705] Linking static target drivers/librte_mempool_ring.a 00:02:57.880 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.880 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:58.138 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:58.397 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:58.397 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.397 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.656 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:59.223 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:59.223 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:59.223 [549/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:59.223 [550/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.223 [551/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:59.223 [552/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:59.223 [553/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:59.481 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:59.481 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:59.481 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:59.761 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:59.761 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:59.761 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:00.019 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:00.019 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:00.278 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:00.278 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:00.278 [564/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:00.278 [565/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:00.278 [566/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:00.278 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:00.536 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:00.536 [569/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:00.536 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:00.536 [571/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:00.536 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:00.536 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:00.793 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:00.793 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:00.793 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:01.052 [577/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:01.052 [578/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:01.052 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:01.052 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:01.310 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:01.310 [582/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.310 [583/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.310 [584/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:01.310 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:01.310 [586/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.310 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:01.310 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:01.569 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.569 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:01.828 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:01.828 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:01.828 [593/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:01.828 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:01.828 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:01.828 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:02.086 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:02.345 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:02.345 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:02.345 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:02.345 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:02.345 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
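
[Note] By this point the job has linked the enabled driver set (librte_bus_pci, librte_bus_vdev, librte_mempool_ring and, above, the i40e net driver) and is compiling the bundled dpdk-test-* and dpdk-testpmd tools; each library is produced both as a static .a and as a shared .so.24.0, with the "Generating symbol file" / "sym_chk" steps checking exported symbols against the version maps. The sketch below shows how the results could be inspected or installed into the prefix recorded in the configuration summary; the install step is an assumption (this excerpt only shows compilation and linking), and the paths simply reuse the build and prefix directories seen earlier in the log.

  # Assumed follow-up once ninja finishes: install into the configured prefix.
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp install
  ls /home/vagrant/spdk_repo/dpdk/build/lib        # installed librte_*.a / librte_*.so
  # The helper apps being linked above (dpdk-graph, dpdk-testpmd, dpdk-test-*, ...)
  # stay under the build tree's app/ directory:
  ls /home/vagrant/spdk_repo/dpdk/build-tmp/app
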
00:03:02.345 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:02.345 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:02.604 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.604 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.604 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.604 [608/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:02.604 [609/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.863 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.863 [611/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.863 [612/705] Linking static target lib/librte_vhost.a 00:03:02.863 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:03.122 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:03.122 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:03.122 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:03.380 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.638 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:03.638 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.638 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.638 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.638 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.897 [623/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.897 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:03.897 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.897 [626/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.897 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.897 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:04.155 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.155 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:04.155 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:04.155 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:04.414 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:04.414 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:04.414 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:04.414 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:04.414 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:04.684 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:04.684 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:04.684 [640/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:04.684 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:04.684 [642/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:04.956 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:04.956 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:04.956 [645/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.956 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:04.956 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:04.956 [648/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:04.956 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:04.956 [650/705] Linking static target lib/librte_pipeline.a 00:03:05.215 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:05.215 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:05.215 [653/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:05.474 [654/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:05.474 [655/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:05.474 [656/705] Linking target app/dpdk-dumpcap 00:03:05.474 [657/705] Linking target app/dpdk-graph 00:03:05.474 [658/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:05.732 [659/705] Linking target app/dpdk-pdump 00:03:05.732 [660/705] Linking target app/dpdk-proc-info 00:03:05.732 [661/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:05.732 [662/705] Linking target app/dpdk-test-acl 00:03:05.732 [663/705] Linking target app/dpdk-test-compress-perf 00:03:05.990 [664/705] Linking target app/dpdk-test-cmdline 00:03:05.990 [665/705] Linking target app/dpdk-test-dma-perf 00:03:05.990 [666/705] Linking target app/dpdk-test-crypto-perf 00:03:05.990 [667/705] Linking target app/dpdk-test-eventdev 00:03:05.990 [668/705] Linking target app/dpdk-test-bbdev 00:03:05.990 [669/705] Linking target app/dpdk-test-fib 00:03:06.248 [670/705] Linking target app/dpdk-test-gpudev 00:03:06.248 [671/705] Linking target app/dpdk-test-flow-perf 00:03:06.248 [672/705] Linking target app/dpdk-test-mldev 00:03:06.248 [673/705] Linking target app/dpdk-test-pipeline 00:03:06.248 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:06.505 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:06.505 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:06.762 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:06.762 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:06.762 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:06.762 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:06.762 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:06.762 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.019 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:07.019 [684/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.019 [685/705] Compiling C 
object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:07.019 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:07.019 [687/705] Linking target lib/librte_pipeline.so.24.0 00:03:07.277 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.277 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:07.277 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:07.535 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:07.535 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:07.535 [693/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:07.792 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:07.792 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:07.792 [696/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:07.792 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:08.050 [698/705] Linking target app/dpdk-test-sad 00:03:08.050 [699/705] Linking target app/dpdk-test-regex 00:03:08.050 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.050 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:08.307 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:08.307 [703/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:08.566 [704/705] Linking target app/dpdk-testpmd 00:03:08.566 [705/705] Linking target app/dpdk-test-security-perf 00:03:08.566 23:57:23 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:08.825 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:08.825 [0/1] Installing files. 
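For reference, the `ninja ... install` step launched by common/autobuild_common.sh above is the tail end of an out-of-tree Meson/Ninja build of DPDK; a minimal equivalent sequence is sketched below. Only the `-C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install` invocation and the `/home/vagrant/spdk_repo/dpdk/build` install prefix are taken from this log; the `meson setup` line and its flags are assumptions, not the exact options the autobuild script uses.

  $ cd /home/vagrant/spdk_repo/dpdk
  $ meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build   # assumed flags; configures the out-of-tree build dir
  $ ninja -C build-tmp -j10                                             # compiles and links the targets shown above ([1/705] ... [705/705])
  $ ninja -C build-tmp -j10 install                                     # copies libraries, headers and the examples tree under the prefix

The "Installing ... to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/..." lines that follow are the output of that final install step.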
00:03:08.825 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.826 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.826 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.087 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:09.088 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.089 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.089 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.089 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:09.090 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:09.090 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.090 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:09.352 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:09.352 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:09.352 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.352 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:09.352 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.355 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.356 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:09.356 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:09.356 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:09.356 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:09.356 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:09.356 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:09.356 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:09.356 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:09.356 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:09.356 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:09.356 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:09.356 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:09.356 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:09.356 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:09.356 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:09.356 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:09.356 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:09.356 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:09.356 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:09.356 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:09.356 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:09.356 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:09.356 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:09.356 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:09.356 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:09.356 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:09.356 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:09.356 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:09.356 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:09.356 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:09.356 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:09.356 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:09.356 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:09.356 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:09.357 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:09.357 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:09.357 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:09.357 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:09.357 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:09.357 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:09.357 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:09.357 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:09.357 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:09.357 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:09.357 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:09.357 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:09.357 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:09.357 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:09.357 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:09.357 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:09.357 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:09.357 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:09.357 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:09.357 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:09.357 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:09.357 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:09.357 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:09.357 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:09.357 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:09.357 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:09.357 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:09.357 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:09.357 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:09.357 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:09.357 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:09.357 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:09.357 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:09.357 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:09.357 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:09.357 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:09.357 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:09.357 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:09.357 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:09.357 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:09.357 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:09.357 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:09.357 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:09.357 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:09.357 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:09.357 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:09.357 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:09.357 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:09.357 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:09.357 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:09.357 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:09.357 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:09.357 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:09.357 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:09.357 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:09.357 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:09.357 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:09.357 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:09.357 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:09.357 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:09.357 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:09.357 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:09.357 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:09.357 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:09.357 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:09.357 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:09.357 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:09.357 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:09.357 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:09.357 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:09.357 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:09.357 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:09.357 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:09.357 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:09.357 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:09.357 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:09.357 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:09.357 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:09.357 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:09.357 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:09.357 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:09.357 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:09.357 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:09.357 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:09.357 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:09.357 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:09.357 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:09.358 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:09.358 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:09.358 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:09.358 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:09.358 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:09.358 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:09.358 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:09.358 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:09.358 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:09.358 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:09.358 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:09.358 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:09.358 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:09.358 23:57:23 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:09.358 23:57:23 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:09.358 23:57:23 -- common/autobuild_common.sh@203 -- $ cat 00:03:09.358 23:57:23 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:09.358 00:03:09.358 real 0m36.940s 00:03:09.358 user 4m17.838s 00:03:09.358 sys 0m36.977s 00:03:09.358 23:57:23 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:09.358 23:57:23 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.358 ************************************ 00:03:09.358 END TEST build_native_dpdk 00:03:09.358 ************************************ 00:03:09.358 23:57:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:09.358 23:57:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:09.358 23:57:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:09.358 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:09.617 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.617 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:09.617 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:09.875 Using 'verbs' RDMA provider 00:03:20.854 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:30.837 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:31.404 Creating mk/config.mk...done. 00:03:31.404 Creating mk/cc.flags.mk...done. 00:03:31.404 Type 'make' to build. 00:03:31.404 23:57:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:31.404 23:57:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:31.404 23:57:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:31.404 23:57:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.404 ************************************ 00:03:31.404 START TEST make 00:03:31.404 ************************************ 00:03:31.404 23:57:45 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:31.404 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:31.404 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:31.404 meson setup builddir \ 00:03:31.404 -Dwith-libaio=enabled \ 00:03:31.404 -Dwith-liburing=enabled \ 00:03:31.404 -Dwith-libvfn=disabled \ 00:03:31.404 -Dwith-spdk=false && \ 00:03:31.404 meson compile -C builddir && \ 00:03:31.404 cd -) 00:03:31.404 make[1]: Nothing to be done for 'all'. 
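Note: the xnvme submodule build that the make test launches above can be reproduced by hand outside the CI run. The following is a minimal sketch assembled from the exact commands printed in the log (it assumes meson and ninja are installed and that the repository sits at the same /home/vagrant/spdk_repo path as in this run):

  # Build the xnvme submodule out of tree, with the same options as this run:
  # libaio and io_uring backends enabled, the libvfn backend disabled, and the
  # SPDK backend disabled (xnvme is compiled here before SPDK itself).
  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=false
  meson compile -C builddir

Meson then picks ninja as the build backend on its own, which is what the "INFO: autodetecting backend as ninja" line further down reflects.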
00:03:33.935 The Meson build system 00:03:33.935 Version: 1.5.0 00:03:33.935 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:33.935 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:33.935 Build type: native build 00:03:33.935 Project name: xnvme 00:03:33.935 Project version: 0.7.3 00:03:33.935 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:33.935 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:33.935 Host machine cpu family: x86_64 00:03:33.935 Host machine cpu: x86_64 00:03:33.935 Message: host_machine.system: linux 00:03:33.935 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:33.935 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:33.935 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:33.935 Run-time dependency threads found: YES 00:03:33.935 Has header "setupapi.h" : NO 00:03:33.935 Has header "linux/blkzoned.h" : YES 00:03:33.935 Has header "linux/blkzoned.h" : YES (cached) 00:03:33.935 Has header "libaio.h" : YES 00:03:33.935 Library aio found: YES 00:03:33.935 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:33.935 Run-time dependency liburing found: YES 2.2 00:03:33.935 Dependency libvfn skipped: feature with-libvfn disabled 00:03:33.935 Run-time dependency appleframeworks found: NO (tried framework) 00:03:33.935 Run-time dependency appleframeworks found: NO (tried framework) 00:03:33.935 Configuring xnvme_config.h using configuration 00:03:33.935 Configuring xnvme.spec using configuration 00:03:33.935 Run-time dependency bash-completion found: YES 2.11 00:03:33.935 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:33.935 Program cp found: YES (/usr/bin/cp) 00:03:33.935 Has header "winsock2.h" : NO 00:03:33.935 Has header "dbghelp.h" : NO 00:03:33.935 Library rpcrt4 found: NO 00:03:33.935 Library rt found: YES 00:03:33.935 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:33.935 Found CMake: /usr/bin/cmake (3.27.7) 00:03:33.935 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:33.935 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:33.935 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:33.935 Build targets in project: 32 00:03:33.935 00:03:33.935 xnvme 0.7.3 00:03:33.935 00:03:33.935 User defined options 00:03:33.935 with-libaio : enabled 00:03:33.935 with-liburing: enabled 00:03:33.935 with-libvfn : disabled 00:03:33.935 with-spdk : false 00:03:33.935 00:03:33.935 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:33.935 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:33.935 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:34.194 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:34.194 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:34.194 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:34.194 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.194 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:34.194 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.194 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:34.194 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.194 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:34.194 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:34.194 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:34.194 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:34.194 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:34.194 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:34.194 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:34.194 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:34.194 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.194 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:34.194 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:34.194 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:34.452 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:34.452 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:34.452 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:34.452 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:34.452 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:34.452 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:34.452 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:34.452 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:34.452 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:34.452 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:34.452 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:34.452 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:34.452 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:34.452 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:34.452 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:34.452 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:34.452 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:34.452 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:34.452 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:34.452 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:34.452 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:34.452 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:34.452 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:34.452 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:34.452 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:34.452 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:34.452 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:34.452 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:34.452 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:34.452 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:34.452 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:34.452 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:34.452 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:34.452 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:34.452 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:34.452 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:34.711 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:34.711 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:34.711 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:34.711 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:34.711 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:34.711 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:34.711 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:34.711 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:34.711 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:34.711 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:34.711 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:34.711 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:34.711 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.711 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:34.711 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:34.711 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:34.711 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.711 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:34.711 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:34.711 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:34.711 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:34.711 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:34.969 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.969 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:34.969 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:34.969 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:34.969 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:34.969 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:34.969 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:34.969 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:34.969 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:34.969 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:34.969 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:34.969 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.969 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:34.969 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:34.969 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:34.969 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:34.969 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:34.969 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:34.969 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:34.969 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:34.969 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:34.969 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:35.228 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:35.228 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:35.228 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:35.228 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:35.228 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:35.228 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:35.228 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:35.228 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:35.228 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:35.228 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:35.228 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:35.228 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:35.228 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:35.228 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.228 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:35.228 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:35.228 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:35.228 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:35.228 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:35.228 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:35.228 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:35.228 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:35.228 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:35.228 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:35.228 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:35.228 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:35.228 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:35.228 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:35.228 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:35.228 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:35.228 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:35.228 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:35.486 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:35.486 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:35.486 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:35.486 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:35.486 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:35.486 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:35.486 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:35.486 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:35.486 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:03:35.486 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:35.486 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:35.486 [145/203] Linking target lib/libxnvme.so 00:03:35.486 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:35.486 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:35.486 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:35.758 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:35.758 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:35.758 [151/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:35.758 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:35.758 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:35.758 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:35.758 [155/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:35.758 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:35.758 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:35.758 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:35.758 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:35.758 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:35.758 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:35.758 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:35.758 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:35.758 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:35.758 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:35.758 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:36.047 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:36.047 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:36.047 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:36.047 [170/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:36.047 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:36.047 [172/203] Linking static target lib/libxnvme.a 00:03:36.047 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:36.047 [174/203] Linking target tests/xnvme_tests_ioworker 00:03:36.047 [175/203] Linking target tests/xnvme_tests_lblk 00:03:36.047 [176/203] Linking target tests/xnvme_tests_buf 00:03:36.047 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:36.047 [178/203] Linking target tests/xnvme_tests_scc 00:03:36.047 [179/203] Linking target tests/xnvme_tests_znd_append 00:03:36.047 [180/203] Linking target tests/xnvme_tests_cli 00:03:36.047 [181/203] Linking target tests/xnvme_tests_enum 00:03:36.047 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:36.047 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:36.047 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:03:36.047 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:36.047 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:36.047 [187/203] Linking target tools/lblk 00:03:36.047 [188/203] Linking target tests/xnvme_tests_kvs 00:03:36.047 [189/203] Linking target tests/xnvme_tests_map 00:03:36.047 [190/203] Linking target tools/xnvme_file 00:03:36.047 [191/203] Linking target 
examples/xnvme_enum 00:03:36.047 [192/203] Linking target tools/xdd 00:03:36.047 [193/203] Linking target tools/kvs 00:03:36.047 [194/203] Linking target examples/xnvme_dev 00:03:36.047 [195/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:36.047 [196/203] Linking target tools/zoned 00:03:36.047 [197/203] Linking target examples/xnvme_io_async 00:03:36.047 [198/203] Linking target examples/xnvme_single_async 00:03:36.047 [199/203] Linking target examples/zoned_io_sync 00:03:36.047 [200/203] Linking target examples/xnvme_hello 00:03:36.047 [201/203] Linking target examples/xnvme_single_sync 00:03:36.047 [202/203] Linking target examples/zoned_io_async 00:03:36.047 [203/203] Linking target tools/xnvme 00:03:36.047 INFO: autodetecting backend as ninja 00:03:36.047 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:36.305 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:48.505 CC lib/log/log_flags.o 00:03:48.505 CC lib/log/log.o 00:03:48.505 CC lib/log/log_deprecated.o 00:03:48.505 CC lib/ut_mock/mock.o 00:03:48.505 CC lib/ut/ut.o 00:03:48.505 LIB libspdk_ut_mock.a 00:03:48.505 LIB libspdk_ut.a 00:03:48.505 SO libspdk_ut_mock.so.5.0 00:03:48.505 LIB libspdk_log.a 00:03:48.505 SO libspdk_ut.so.1.0 00:03:48.505 SO libspdk_log.so.6.1 00:03:48.505 SYMLINK libspdk_ut_mock.so 00:03:48.505 SYMLINK libspdk_log.so 00:03:48.505 SYMLINK libspdk_ut.so 00:03:48.505 CC lib/util/base64.o 00:03:48.505 CC lib/util/bit_array.o 00:03:48.505 CC lib/util/crc16.o 00:03:48.505 CC lib/util/cpuset.o 00:03:48.505 CC lib/util/crc32.o 00:03:48.505 CC lib/util/crc32c.o 00:03:48.505 CXX lib/trace_parser/trace.o 00:03:48.505 CC lib/dma/dma.o 00:03:48.505 CC lib/ioat/ioat.o 00:03:48.505 CC lib/vfio_user/host/vfio_user_pci.o 00:03:48.505 CC lib/util/crc32_ieee.o 00:03:48.505 CC lib/util/crc64.o 00:03:48.505 CC lib/util/dif.o 00:03:48.505 CC lib/util/fd.o 00:03:48.505 CC lib/vfio_user/host/vfio_user.o 00:03:48.505 CC lib/util/file.o 00:03:48.505 CC lib/util/hexlify.o 00:03:48.505 CC lib/util/iov.o 00:03:48.505 LIB libspdk_dma.a 00:03:48.505 SO libspdk_dma.so.3.0 00:03:48.505 CC lib/util/math.o 00:03:48.505 CC lib/util/pipe.o 00:03:48.505 CC lib/util/strerror_tls.o 00:03:48.505 SYMLINK libspdk_dma.so 00:03:48.505 CC lib/util/string.o 00:03:48.505 LIB libspdk_ioat.a 00:03:48.505 SO libspdk_ioat.so.6.0 00:03:48.505 CC lib/util/uuid.o 00:03:48.505 LIB libspdk_vfio_user.a 00:03:48.505 SYMLINK libspdk_ioat.so 00:03:48.505 CC lib/util/fd_group.o 00:03:48.505 SO libspdk_vfio_user.so.4.0 00:03:48.505 CC lib/util/xor.o 00:03:48.505 CC lib/util/zipf.o 00:03:48.763 SYMLINK libspdk_vfio_user.so 00:03:48.763 LIB libspdk_util.a 00:03:48.763 SO libspdk_util.so.8.0 00:03:49.021 SYMLINK libspdk_util.so 00:03:49.021 CC lib/conf/conf.o 00:03:49.021 CC lib/idxd/idxd.o 00:03:49.021 CC lib/idxd/idxd_kernel.o 00:03:49.021 CC lib/idxd/idxd_user.o 00:03:49.021 CC lib/json/json_parse.o 00:03:49.021 CC lib/json/json_util.o 00:03:49.021 CC lib/rdma/common.o 00:03:49.021 CC lib/env_dpdk/env.o 00:03:49.021 CC lib/vmd/vmd.o 00:03:49.021 LIB libspdk_trace_parser.a 00:03:49.021 SO libspdk_trace_parser.so.4.0 00:03:49.281 CC lib/vmd/led.o 00:03:49.281 LIB libspdk_conf.a 00:03:49.281 SYMLINK libspdk_trace_parser.so 00:03:49.281 CC lib/env_dpdk/memory.o 00:03:49.281 CC lib/env_dpdk/pci.o 00:03:49.281 SO libspdk_conf.so.5.0 00:03:49.281 CC lib/json/json_write.o 00:03:49.281 CC lib/env_dpdk/init.o 00:03:49.281 SYMLINK libspdk_conf.so 00:03:49.281 CC lib/env_dpdk/threads.o 00:03:49.281 CC 
lib/env_dpdk/pci_ioat.o 00:03:49.281 CC lib/rdma/rdma_verbs.o 00:03:49.281 CC lib/env_dpdk/pci_virtio.o 00:03:49.542 CC lib/env_dpdk/pci_vmd.o 00:03:49.542 CC lib/env_dpdk/pci_idxd.o 00:03:49.542 LIB libspdk_rdma.a 00:03:49.543 CC lib/env_dpdk/pci_event.o 00:03:49.543 SO libspdk_rdma.so.5.0 00:03:49.543 CC lib/env_dpdk/sigbus_handler.o 00:03:49.543 LIB libspdk_json.a 00:03:49.543 SO libspdk_json.so.5.1 00:03:49.543 SYMLINK libspdk_rdma.so 00:03:49.543 CC lib/env_dpdk/pci_dpdk.o 00:03:49.543 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.543 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.543 SYMLINK libspdk_json.so 00:03:49.543 LIB libspdk_idxd.a 00:03:49.543 SO libspdk_idxd.so.11.0 00:03:49.801 SYMLINK libspdk_idxd.so 00:03:49.801 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.801 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.801 LIB libspdk_vmd.a 00:03:49.801 SO libspdk_vmd.so.5.0 00:03:49.801 SYMLINK libspdk_vmd.so 00:03:50.061 LIB libspdk_jsonrpc.a 00:03:50.061 SO libspdk_jsonrpc.so.5.1 00:03:50.061 SYMLINK libspdk_jsonrpc.so 00:03:50.320 CC lib/rpc/rpc.o 00:03:50.320 LIB libspdk_env_dpdk.a 00:03:50.320 SO libspdk_env_dpdk.so.13.0 00:03:50.320 LIB libspdk_rpc.a 00:03:50.320 SO libspdk_rpc.so.5.0 00:03:50.320 SYMLINK libspdk_rpc.so 00:03:50.580 SYMLINK libspdk_env_dpdk.so 00:03:50.580 CC lib/trace/trace.o 00:03:50.580 CC lib/trace/trace_rpc.o 00:03:50.581 CC lib/trace/trace_flags.o 00:03:50.581 CC lib/notify/notify.o 00:03:50.581 CC lib/sock/sock_rpc.o 00:03:50.581 CC lib/notify/notify_rpc.o 00:03:50.581 CC lib/sock/sock.o 00:03:50.581 LIB libspdk_notify.a 00:03:50.840 SO libspdk_notify.so.5.0 00:03:50.840 LIB libspdk_trace.a 00:03:50.840 SO libspdk_trace.so.9.0 00:03:50.840 SYMLINK libspdk_notify.so 00:03:50.840 SYMLINK libspdk_trace.so 00:03:50.840 LIB libspdk_sock.a 00:03:50.840 SO libspdk_sock.so.8.0 00:03:51.098 CC lib/thread/thread.o 00:03:51.098 CC lib/thread/iobuf.o 00:03:51.098 SYMLINK libspdk_sock.so 00:03:51.098 CC lib/nvme/nvme_fabric.o 00:03:51.098 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.098 CC lib/nvme/nvme_ctrlr.o 00:03:51.098 CC lib/nvme/nvme_ns_cmd.o 00:03:51.098 CC lib/nvme/nvme_pcie.o 00:03:51.098 CC lib/nvme/nvme_qpair.o 00:03:51.098 CC lib/nvme/nvme_ns.o 00:03:51.098 CC lib/nvme/nvme_pcie_common.o 00:03:51.356 CC lib/nvme/nvme.o 00:03:51.613 CC lib/nvme/nvme_quirks.o 00:03:51.871 CC lib/nvme/nvme_transport.o 00:03:51.871 CC lib/nvme/nvme_discovery.o 00:03:51.871 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.871 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.871 CC lib/nvme/nvme_tcp.o 00:03:51.871 CC lib/nvme/nvme_opal.o 00:03:52.128 CC lib/nvme/nvme_io_msg.o 00:03:52.128 CC lib/nvme/nvme_poll_group.o 00:03:52.385 CC lib/nvme/nvme_zns.o 00:03:52.385 CC lib/nvme/nvme_cuse.o 00:03:52.385 CC lib/nvme/nvme_vfio_user.o 00:03:52.385 CC lib/nvme/nvme_rdma.o 00:03:52.385 LIB libspdk_thread.a 00:03:52.385 SO libspdk_thread.so.9.0 00:03:52.385 SYMLINK libspdk_thread.so 00:03:52.643 CC lib/blob/blobstore.o 00:03:52.643 CC lib/accel/accel.o 00:03:52.643 CC lib/accel/accel_rpc.o 00:03:52.643 CC lib/accel/accel_sw.o 00:03:52.900 CC lib/blob/request.o 00:03:52.901 CC lib/blob/zeroes.o 00:03:52.901 CC lib/init/json_config.o 00:03:52.901 CC lib/init/subsystem.o 00:03:52.901 CC lib/init/subsystem_rpc.o 00:03:52.901 CC lib/blob/blob_bs_dev.o 00:03:52.901 CC lib/init/rpc.o 00:03:53.158 CC lib/virtio/virtio.o 00:03:53.158 CC lib/virtio/virtio_vhost_user.o 00:03:53.158 CC lib/virtio/virtio_vfio_user.o 00:03:53.158 LIB 
libspdk_init.a 00:03:53.158 SO libspdk_init.so.4.0 00:03:53.158 CC lib/virtio/virtio_pci.o 00:03:53.158 SYMLINK libspdk_init.so 00:03:53.158 CC lib/event/app.o 00:03:53.158 CC lib/event/reactor.o 00:03:53.417 CC lib/event/log_rpc.o 00:03:53.417 CC lib/event/app_rpc.o 00:03:53.417 CC lib/event/scheduler_static.o 00:03:53.417 LIB libspdk_virtio.a 00:03:53.417 SO libspdk_virtio.so.6.0 00:03:53.417 SYMLINK libspdk_virtio.so 00:03:53.676 LIB libspdk_nvme.a 00:03:53.676 LIB libspdk_accel.a 00:03:53.676 SO libspdk_accel.so.14.0 00:03:53.676 LIB libspdk_event.a 00:03:53.676 SYMLINK libspdk_accel.so 00:03:53.676 SO libspdk_nvme.so.12.0 00:03:53.676 SO libspdk_event.so.12.0 00:03:53.935 SYMLINK libspdk_event.so 00:03:53.935 CC lib/bdev/bdev_zone.o 00:03:53.935 CC lib/bdev/bdev_rpc.o 00:03:53.935 CC lib/bdev/part.o 00:03:53.935 CC lib/bdev/bdev.o 00:03:53.935 CC lib/bdev/scsi_nvme.o 00:03:53.935 SYMLINK libspdk_nvme.so 00:03:54.870 LIB libspdk_blob.a 00:03:55.127 SO libspdk_blob.so.10.1 00:03:55.127 SYMLINK libspdk_blob.so 00:03:55.127 CC lib/lvol/lvol.o 00:03:55.127 CC lib/blobfs/blobfs.o 00:03:55.127 CC lib/blobfs/tree.o 00:03:56.063 LIB libspdk_blobfs.a 00:03:56.063 SO libspdk_blobfs.so.9.0 00:03:56.063 SYMLINK libspdk_blobfs.so 00:03:56.321 LIB libspdk_lvol.a 00:03:56.321 SO libspdk_lvol.so.9.1 00:03:56.321 SYMLINK libspdk_lvol.so 00:03:56.580 LIB libspdk_bdev.a 00:03:56.580 SO libspdk_bdev.so.14.0 00:03:56.580 SYMLINK libspdk_bdev.so 00:03:56.839 CC lib/scsi/dev.o 00:03:56.839 CC lib/scsi/lun.o 00:03:56.839 CC lib/scsi/port.o 00:03:56.839 CC lib/scsi/scsi.o 00:03:56.839 CC lib/scsi/scsi_bdev.o 00:03:56.839 CC lib/scsi/scsi_pr.o 00:03:56.839 CC lib/nbd/nbd.o 00:03:56.839 CC lib/nvmf/ctrlr.o 00:03:56.839 CC lib/ublk/ublk.o 00:03:56.839 CC lib/ftl/ftl_core.o 00:03:56.839 CC lib/ftl/ftl_init.o 00:03:56.839 CC lib/nbd/nbd_rpc.o 00:03:56.839 CC lib/ublk/ublk_rpc.o 00:03:57.097 CC lib/ftl/ftl_layout.o 00:03:57.097 CC lib/ftl/ftl_debug.o 00:03:57.097 CC lib/scsi/scsi_rpc.o 00:03:57.097 CC lib/scsi/task.o 00:03:57.097 CC lib/nvmf/ctrlr_discovery.o 00:03:57.097 CC lib/ftl/ftl_io.o 00:03:57.097 LIB libspdk_nbd.a 00:03:57.097 CC lib/ftl/ftl_sb.o 00:03:57.097 CC lib/ftl/ftl_l2p.o 00:03:57.097 SO libspdk_nbd.so.6.0 00:03:57.097 SYMLINK libspdk_nbd.so 00:03:57.097 CC lib/ftl/ftl_l2p_flat.o 00:03:57.097 LIB libspdk_scsi.a 00:03:57.355 CC lib/ftl/ftl_nv_cache.o 00:03:57.355 CC lib/ftl/ftl_band.o 00:03:57.355 SO libspdk_scsi.so.8.0 00:03:57.355 CC lib/ftl/ftl_band_ops.o 00:03:57.355 CC lib/nvmf/ctrlr_bdev.o 00:03:57.355 CC lib/ftl/ftl_writer.o 00:03:57.355 CC lib/ftl/ftl_rq.o 00:03:57.355 LIB libspdk_ublk.a 00:03:57.355 SYMLINK libspdk_scsi.so 00:03:57.355 CC lib/ftl/ftl_reloc.o 00:03:57.355 SO libspdk_ublk.so.2.0 00:03:57.355 SYMLINK libspdk_ublk.so 00:03:57.613 CC lib/iscsi/conn.o 00:03:57.613 CC lib/iscsi/init_grp.o 00:03:57.613 CC lib/nvmf/subsystem.o 00:03:57.613 CC lib/vhost/vhost.o 00:03:57.613 CC lib/nvmf/nvmf.o 00:03:57.613 CC lib/ftl/ftl_l2p_cache.o 00:03:57.613 CC lib/ftl/ftl_p2l.o 00:03:57.613 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.871 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.871 CC lib/iscsi/iscsi.o 00:03:57.871 CC lib/iscsi/md5.o 00:03:57.871 CC lib/iscsi/param.o 00:03:58.130 CC lib/iscsi/portal_grp.o 00:03:58.130 CC lib/iscsi/tgt_node.o 00:03:58.130 CC lib/iscsi/iscsi_subsystem.o 00:03:58.130 CC lib/vhost/vhost_rpc.o 00:03:58.130 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.130 CC lib/nvmf/nvmf_rpc.o 00:03:58.389 CC lib/nvmf/transport.o 00:03:58.389 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.389 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:03:58.389 CC lib/iscsi/iscsi_rpc.o 00:03:58.389 CC lib/nvmf/tcp.o 00:03:58.389 CC lib/nvmf/rdma.o 00:03:58.648 CC lib/iscsi/task.o 00:03:58.648 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.648 CC lib/vhost/vhost_scsi.o 00:03:58.648 CC lib/vhost/vhost_blk.o 00:03:58.648 CC lib/vhost/rte_vhost_user.o 00:03:58.648 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.907 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:59.166 LIB libspdk_iscsi.a 00:03:59.166 CC lib/ftl/utils/ftl_conf.o 00:03:59.166 SO libspdk_iscsi.so.7.0 00:03:59.166 CC lib/ftl/utils/ftl_md.o 00:03:59.166 CC lib/ftl/utils/ftl_mempool.o 00:03:59.166 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.166 CC lib/ftl/utils/ftl_property.o 00:03:59.424 SYMLINK libspdk_iscsi.so 00:03:59.424 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.424 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.424 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.424 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.424 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.424 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.424 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.424 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.424 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.424 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.424 CC lib/ftl/base/ftl_base_dev.o 00:03:59.424 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.424 CC lib/ftl/ftl_trace.o 00:03:59.683 LIB libspdk_vhost.a 00:03:59.683 SO libspdk_vhost.so.7.1 00:03:59.683 SYMLINK libspdk_vhost.so 00:03:59.941 LIB libspdk_ftl.a 00:03:59.941 SO libspdk_ftl.so.8.0 00:04:00.200 SYMLINK libspdk_ftl.so 00:04:00.457 LIB libspdk_nvmf.a 00:04:00.457 SO libspdk_nvmf.so.17.0 00:04:00.714 SYMLINK libspdk_nvmf.so 00:04:00.971 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.971 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.971 CC module/accel/error/accel_error.o 00:04:00.971 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.971 CC module/accel/dsa/accel_dsa.o 00:04:00.971 CC module/accel/iaa/accel_iaa.o 00:04:00.971 CC module/accel/ioat/accel_ioat.o 00:04:00.971 CC module/blob/bdev/blob_bdev.o 00:04:00.971 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.971 CC module/sock/posix/posix.o 00:04:00.971 LIB libspdk_env_dpdk_rpc.a 00:04:00.971 SO libspdk_env_dpdk_rpc.so.5.0 00:04:00.971 LIB libspdk_scheduler_gscheduler.a 00:04:00.971 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.971 SO libspdk_scheduler_gscheduler.so.3.0 00:04:00.971 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:00.971 CC module/accel/error/accel_error_rpc.o 00:04:00.971 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.971 CC module/accel/iaa/accel_iaa_rpc.o 00:04:00.971 SYMLINK libspdk_env_dpdk_rpc.so 00:04:00.971 CC module/accel/dsa/accel_dsa_rpc.o 00:04:00.971 SYMLINK libspdk_scheduler_gscheduler.so 00:04:00.971 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:01.229 LIB libspdk_scheduler_dynamic.a 00:04:01.229 SO libspdk_scheduler_dynamic.so.3.0 00:04:01.229 LIB libspdk_accel_error.a 00:04:01.229 LIB libspdk_accel_ioat.a 00:04:01.229 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.229 LIB libspdk_accel_iaa.a 00:04:01.229 LIB libspdk_blob_bdev.a 00:04:01.229 SO libspdk_accel_error.so.1.0 00:04:01.229 SO libspdk_accel_ioat.so.5.0 00:04:01.229 LIB libspdk_accel_dsa.a 00:04:01.229 SO libspdk_accel_iaa.so.2.0 00:04:01.229 SO libspdk_blob_bdev.so.10.1 
00:04:01.229 SO libspdk_accel_dsa.so.4.0 00:04:01.229 SYMLINK libspdk_accel_error.so 00:04:01.229 SYMLINK libspdk_accel_ioat.so 00:04:01.229 SYMLINK libspdk_blob_bdev.so 00:04:01.229 SYMLINK libspdk_accel_iaa.so 00:04:01.229 SYMLINK libspdk_accel_dsa.so 00:04:01.486 CC module/bdev/malloc/bdev_malloc.o 00:04:01.486 CC module/bdev/gpt/gpt.o 00:04:01.486 CC module/bdev/null/bdev_null.o 00:04:01.486 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.486 CC module/bdev/passthru/vbdev_passthru.o 00:04:01.486 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.486 CC module/bdev/error/vbdev_error.o 00:04:01.486 CC module/bdev/delay/vbdev_delay.o 00:04:01.486 CC module/bdev/nvme/bdev_nvme.o 00:04:01.486 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.486 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.486 CC module/bdev/null/bdev_null_rpc.o 00:04:01.486 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:01.745 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.745 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:01.745 LIB libspdk_sock_posix.a 00:04:01.745 LIB libspdk_blobfs_bdev.a 00:04:01.745 SO libspdk_sock_posix.so.5.0 00:04:01.745 LIB libspdk_bdev_passthru.a 00:04:01.745 LIB libspdk_bdev_null.a 00:04:01.745 SO libspdk_blobfs_bdev.so.5.0 00:04:01.745 SO libspdk_bdev_passthru.so.5.0 00:04:01.745 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.745 SO libspdk_bdev_null.so.5.0 00:04:01.745 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.745 LIB libspdk_bdev_malloc.a 00:04:01.745 LIB libspdk_bdev_gpt.a 00:04:01.745 SYMLINK libspdk_sock_posix.so 00:04:01.745 SYMLINK libspdk_bdev_passthru.so 00:04:01.745 SYMLINK libspdk_blobfs_bdev.so 00:04:01.745 SO libspdk_bdev_malloc.so.5.0 00:04:01.745 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:01.745 CC module/bdev/nvme/nvme_rpc.o 00:04:01.745 CC module/bdev/nvme/bdev_mdns_client.o 00:04:01.745 LIB libspdk_bdev_error.a 00:04:01.745 SYMLINK libspdk_bdev_null.so 00:04:01.745 SO libspdk_bdev_gpt.so.5.0 00:04:01.745 CC module/bdev/nvme/vbdev_opal.o 00:04:01.745 SO libspdk_bdev_error.so.5.0 00:04:01.745 SYMLINK libspdk_bdev_malloc.so 00:04:01.745 SYMLINK libspdk_bdev_gpt.so 00:04:01.745 SYMLINK libspdk_bdev_error.so 00:04:01.745 LIB libspdk_bdev_delay.a 00:04:01.745 SO libspdk_bdev_delay.so.5.0 00:04:02.004 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.004 CC module/bdev/raid/bdev_raid.o 00:04:02.004 SYMLINK libspdk_bdev_delay.so 00:04:02.004 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.004 CC module/bdev/split/vbdev_split.o 00:04:02.004 LIB libspdk_bdev_lvol.a 00:04:02.004 CC module/bdev/xnvme/bdev_xnvme.o 00:04:02.004 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.004 SO libspdk_bdev_lvol.so.5.0 00:04:02.004 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:02.004 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.004 SYMLINK libspdk_bdev_lvol.so 00:04:02.004 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.004 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.004 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.262 LIB libspdk_bdev_split.a 00:04:02.262 CC module/bdev/raid/raid0.o 00:04:02.262 LIB libspdk_bdev_zone_block.a 00:04:02.262 LIB libspdk_bdev_xnvme.a 00:04:02.262 SO libspdk_bdev_split.so.5.0 00:04:02.262 SO libspdk_bdev_zone_block.so.5.0 00:04:02.262 CC module/bdev/aio/bdev_aio.o 00:04:02.262 SO libspdk_bdev_xnvme.so.2.0 00:04:02.262 CC module/bdev/raid/raid1.o 00:04:02.262 CC module/bdev/raid/concat.o 00:04:02.262 SYMLINK libspdk_bdev_split.so 00:04:02.262 SYMLINK libspdk_bdev_zone_block.so 00:04:02.262 CC module/bdev/ftl/bdev_ftl.o 00:04:02.262 SYMLINK libspdk_bdev_xnvme.so 
00:04:02.262 CC module/bdev/aio/bdev_aio_rpc.o 00:04:02.262 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.262 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:02.521 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:02.521 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:02.521 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.521 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:02.521 LIB libspdk_bdev_aio.a 00:04:02.521 SO libspdk_bdev_aio.so.5.0 00:04:02.521 SYMLINK libspdk_bdev_aio.so 00:04:02.521 LIB libspdk_bdev_raid.a 00:04:02.521 SO libspdk_bdev_raid.so.5.0 00:04:02.521 LIB libspdk_bdev_ftl.a 00:04:02.521 LIB libspdk_bdev_iscsi.a 00:04:02.521 SO libspdk_bdev_ftl.so.5.0 00:04:02.779 SO libspdk_bdev_iscsi.so.5.0 00:04:02.779 SYMLINK libspdk_bdev_raid.so 00:04:02.779 SYMLINK libspdk_bdev_ftl.so 00:04:02.779 SYMLINK libspdk_bdev_iscsi.so 00:04:02.779 LIB libspdk_bdev_virtio.a 00:04:02.779 SO libspdk_bdev_virtio.so.5.0 00:04:02.779 SYMLINK libspdk_bdev_virtio.so 00:04:03.741 LIB libspdk_bdev_nvme.a 00:04:03.741 SO libspdk_bdev_nvme.so.6.0 00:04:03.741 SYMLINK libspdk_bdev_nvme.so 00:04:04.000 CC module/event/subsystems/scheduler/scheduler.o 00:04:04.000 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:04.000 CC module/event/subsystems/iobuf/iobuf.o 00:04:04.000 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:04.000 CC module/event/subsystems/vmd/vmd.o 00:04:04.000 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:04.000 CC module/event/subsystems/sock/sock.o 00:04:04.000 LIB libspdk_event_vhost_blk.a 00:04:04.000 LIB libspdk_event_scheduler.a 00:04:04.000 SO libspdk_event_vhost_blk.so.2.0 00:04:04.000 LIB libspdk_event_sock.a 00:04:04.000 LIB libspdk_event_iobuf.a 00:04:04.000 SO libspdk_event_scheduler.so.3.0 00:04:04.000 LIB libspdk_event_vmd.a 00:04:04.000 SO libspdk_event_sock.so.4.0 00:04:04.000 SO libspdk_event_iobuf.so.2.0 00:04:04.000 SO libspdk_event_vmd.so.5.0 00:04:04.000 SYMLINK libspdk_event_vhost_blk.so 00:04:04.000 SYMLINK libspdk_event_scheduler.so 00:04:04.000 SYMLINK libspdk_event_sock.so 00:04:04.000 SYMLINK libspdk_event_vmd.so 00:04:04.000 SYMLINK libspdk_event_iobuf.so 00:04:04.259 CC module/event/subsystems/accel/accel.o 00:04:04.259 LIB libspdk_event_accel.a 00:04:04.259 SO libspdk_event_accel.so.5.0 00:04:04.518 SYMLINK libspdk_event_accel.so 00:04:04.518 CC module/event/subsystems/bdev/bdev.o 00:04:04.778 LIB libspdk_event_bdev.a 00:04:04.778 SO libspdk_event_bdev.so.5.0 00:04:04.778 SYMLINK libspdk_event_bdev.so 00:04:04.778 CC module/event/subsystems/nbd/nbd.o 00:04:04.778 CC module/event/subsystems/ublk/ublk.o 00:04:04.778 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:04.778 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:04.778 CC module/event/subsystems/scsi/scsi.o 00:04:05.036 LIB libspdk_event_ublk.a 00:04:05.036 SO libspdk_event_ublk.so.2.0 00:04:05.036 LIB libspdk_event_nbd.a 00:04:05.036 LIB libspdk_event_scsi.a 00:04:05.036 SO libspdk_event_nbd.so.5.0 00:04:05.036 SYMLINK libspdk_event_ublk.so 00:04:05.036 SO libspdk_event_scsi.so.5.0 00:04:05.036 SYMLINK libspdk_event_nbd.so 00:04:05.036 LIB libspdk_event_nvmf.a 00:04:05.036 SYMLINK libspdk_event_scsi.so 00:04:05.036 SO libspdk_event_nvmf.so.5.0 00:04:05.294 SYMLINK libspdk_event_nvmf.so 00:04:05.294 CC module/event/subsystems/iscsi/iscsi.o 00:04:05.294 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:05.294 LIB libspdk_event_iscsi.a 00:04:05.294 LIB libspdk_event_vhost_scsi.a 00:04:05.294 SO libspdk_event_vhost_scsi.so.2.0 00:04:05.294 SO libspdk_event_iscsi.so.5.0 00:04:05.294 SYMLINK 
libspdk_event_vhost_scsi.so 00:04:05.294 SYMLINK libspdk_event_iscsi.so 00:04:05.553 SO libspdk.so.5.0 00:04:05.553 SYMLINK libspdk.so 00:04:05.553 CXX app/trace/trace.o 00:04:05.553 TEST_HEADER include/spdk/accel.h 00:04:05.553 TEST_HEADER include/spdk/accel_module.h 00:04:05.553 TEST_HEADER include/spdk/assert.h 00:04:05.553 TEST_HEADER include/spdk/barrier.h 00:04:05.553 TEST_HEADER include/spdk/base64.h 00:04:05.553 TEST_HEADER include/spdk/bdev.h 00:04:05.553 TEST_HEADER include/spdk/bdev_module.h 00:04:05.553 TEST_HEADER include/spdk/bdev_zone.h 00:04:05.553 TEST_HEADER include/spdk/bit_array.h 00:04:05.553 TEST_HEADER include/spdk/bit_pool.h 00:04:05.553 TEST_HEADER include/spdk/blob_bdev.h 00:04:05.553 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:05.553 TEST_HEADER include/spdk/blobfs.h 00:04:05.553 TEST_HEADER include/spdk/blob.h 00:04:05.553 TEST_HEADER include/spdk/conf.h 00:04:05.553 TEST_HEADER include/spdk/config.h 00:04:05.553 TEST_HEADER include/spdk/cpuset.h 00:04:05.553 TEST_HEADER include/spdk/crc16.h 00:04:05.553 TEST_HEADER include/spdk/crc32.h 00:04:05.553 TEST_HEADER include/spdk/crc64.h 00:04:05.553 TEST_HEADER include/spdk/dif.h 00:04:05.553 CC test/event/event_perf/event_perf.o 00:04:05.553 TEST_HEADER include/spdk/dma.h 00:04:05.553 TEST_HEADER include/spdk/endian.h 00:04:05.553 TEST_HEADER include/spdk/env_dpdk.h 00:04:05.553 TEST_HEADER include/spdk/env.h 00:04:05.553 TEST_HEADER include/spdk/event.h 00:04:05.553 TEST_HEADER include/spdk/fd_group.h 00:04:05.553 TEST_HEADER include/spdk/fd.h 00:04:05.553 TEST_HEADER include/spdk/file.h 00:04:05.553 TEST_HEADER include/spdk/ftl.h 00:04:05.553 TEST_HEADER include/spdk/gpt_spec.h 00:04:05.553 CC examples/accel/perf/accel_perf.o 00:04:05.553 TEST_HEADER include/spdk/hexlify.h 00:04:05.553 TEST_HEADER include/spdk/histogram_data.h 00:04:05.553 CC test/accel/dif/dif.o 00:04:05.553 TEST_HEADER include/spdk/idxd.h 00:04:05.553 TEST_HEADER include/spdk/idxd_spec.h 00:04:05.553 TEST_HEADER include/spdk/init.h 00:04:05.553 TEST_HEADER include/spdk/ioat.h 00:04:05.553 TEST_HEADER include/spdk/ioat_spec.h 00:04:05.553 TEST_HEADER include/spdk/iscsi_spec.h 00:04:05.553 TEST_HEADER include/spdk/json.h 00:04:05.553 TEST_HEADER include/spdk/jsonrpc.h 00:04:05.553 TEST_HEADER include/spdk/likely.h 00:04:05.553 TEST_HEADER include/spdk/log.h 00:04:05.553 TEST_HEADER include/spdk/lvol.h 00:04:05.553 TEST_HEADER include/spdk/memory.h 00:04:05.553 TEST_HEADER include/spdk/mmio.h 00:04:05.553 TEST_HEADER include/spdk/nbd.h 00:04:05.553 TEST_HEADER include/spdk/notify.h 00:04:05.553 TEST_HEADER include/spdk/nvme.h 00:04:05.553 TEST_HEADER include/spdk/nvme_intel.h 00:04:05.812 CC test/blobfs/mkfs/mkfs.o 00:04:05.812 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:05.812 CC test/env/mem_callbacks/mem_callbacks.o 00:04:05.812 CC test/app/bdev_svc/bdev_svc.o 00:04:05.812 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:05.812 TEST_HEADER include/spdk/nvme_spec.h 00:04:05.812 TEST_HEADER include/spdk/nvme_zns.h 00:04:05.812 CC test/dma/test_dma/test_dma.o 00:04:05.812 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:05.812 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:05.812 CC test/bdev/bdevio/bdevio.o 00:04:05.812 TEST_HEADER include/spdk/nvmf.h 00:04:05.812 TEST_HEADER include/spdk/nvmf_spec.h 00:04:05.812 TEST_HEADER include/spdk/nvmf_transport.h 00:04:05.812 TEST_HEADER include/spdk/opal.h 00:04:05.812 TEST_HEADER include/spdk/opal_spec.h 00:04:05.812 TEST_HEADER include/spdk/pci_ids.h 00:04:05.812 TEST_HEADER include/spdk/pipe.h 00:04:05.812 
TEST_HEADER include/spdk/queue.h 00:04:05.812 TEST_HEADER include/spdk/reduce.h 00:04:05.812 TEST_HEADER include/spdk/rpc.h 00:04:05.812 TEST_HEADER include/spdk/scheduler.h 00:04:05.812 TEST_HEADER include/spdk/scsi.h 00:04:05.812 TEST_HEADER include/spdk/scsi_spec.h 00:04:05.812 TEST_HEADER include/spdk/sock.h 00:04:05.812 TEST_HEADER include/spdk/stdinc.h 00:04:05.812 TEST_HEADER include/spdk/string.h 00:04:05.812 TEST_HEADER include/spdk/thread.h 00:04:05.812 TEST_HEADER include/spdk/trace.h 00:04:05.812 TEST_HEADER include/spdk/trace_parser.h 00:04:05.812 TEST_HEADER include/spdk/tree.h 00:04:05.812 TEST_HEADER include/spdk/ublk.h 00:04:05.812 TEST_HEADER include/spdk/util.h 00:04:05.812 TEST_HEADER include/spdk/uuid.h 00:04:05.812 TEST_HEADER include/spdk/version.h 00:04:05.812 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:05.812 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:05.812 TEST_HEADER include/spdk/vhost.h 00:04:05.812 TEST_HEADER include/spdk/vmd.h 00:04:05.812 TEST_HEADER include/spdk/xor.h 00:04:05.812 TEST_HEADER include/spdk/zipf.h 00:04:05.812 CXX test/cpp_headers/accel.o 00:04:05.812 LINK event_perf 00:04:05.812 LINK bdev_svc 00:04:05.812 LINK mkfs 00:04:05.812 LINK spdk_trace 00:04:05.812 CXX test/cpp_headers/accel_module.o 00:04:06.070 LINK dif 00:04:06.070 CC test/event/reactor/reactor.o 00:04:06.070 CXX test/cpp_headers/assert.o 00:04:06.070 LINK accel_perf 00:04:06.070 LINK bdevio 00:04:06.070 LINK reactor 00:04:06.070 LINK test_dma 00:04:06.070 CC app/trace_record/trace_record.o 00:04:06.070 CC test/app/histogram_perf/histogram_perf.o 00:04:06.070 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:06.070 CXX test/cpp_headers/barrier.o 00:04:06.070 LINK mem_callbacks 00:04:06.070 CC app/nvmf_tgt/nvmf_main.o 00:04:06.329 CXX test/cpp_headers/base64.o 00:04:06.329 LINK histogram_perf 00:04:06.329 CC test/event/reactor_perf/reactor_perf.o 00:04:06.329 CXX test/cpp_headers/bdev.o 00:04:06.329 CC examples/bdev/hello_world/hello_bdev.o 00:04:06.329 CC test/env/vtophys/vtophys.o 00:04:06.329 CXX test/cpp_headers/bdev_module.o 00:04:06.329 LINK nvmf_tgt 00:04:06.329 LINK spdk_trace_record 00:04:06.329 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.329 LINK reactor_perf 00:04:06.329 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:06.329 LINK vtophys 00:04:06.329 CC test/env/memory/memory_ut.o 00:04:06.588 CC test/env/pci/pci_ut.o 00:04:06.588 CXX test/cpp_headers/bdev_zone.o 00:04:06.588 LINK hello_bdev 00:04:06.588 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:06.588 LINK nvme_fuzz 00:04:06.588 LINK iscsi_tgt 00:04:06.588 CC test/event/app_repeat/app_repeat.o 00:04:06.588 LINK env_dpdk_post_init 00:04:06.588 CXX test/cpp_headers/bit_array.o 00:04:06.588 CC examples/bdev/bdevperf/bdevperf.o 00:04:06.588 CXX test/cpp_headers/bit_pool.o 00:04:06.588 CXX test/cpp_headers/blob_bdev.o 00:04:06.588 LINK app_repeat 00:04:06.846 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.846 CC app/spdk_lspci/spdk_lspci.o 00:04:06.846 CC app/spdk_tgt/spdk_tgt.o 00:04:06.846 CC app/spdk_nvme_perf/perf.o 00:04:06.846 CXX test/cpp_headers/blobfs_bdev.o 00:04:06.846 LINK spdk_lspci 00:04:06.846 CC test/event/scheduler/scheduler.o 00:04:06.846 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:06.846 LINK pci_ut 00:04:06.846 CXX test/cpp_headers/blobfs.o 00:04:07.104 LINK spdk_tgt 00:04:07.104 LINK scheduler 00:04:07.104 LINK memory_ut 00:04:07.104 CXX test/cpp_headers/blob.o 00:04:07.104 CC examples/blob/hello_world/hello_blob.o 00:04:07.104 CC examples/blob/cli/blobcli.o 00:04:07.104 CC 
test/app/jsoncat/jsoncat.o 00:04:07.104 CXX test/cpp_headers/conf.o 00:04:07.362 LINK vhost_fuzz 00:04:07.362 LINK jsoncat 00:04:07.362 CC examples/ioat/perf/perf.o 00:04:07.362 CC test/lvol/esnap/esnap.o 00:04:07.362 LINK hello_blob 00:04:07.362 CXX test/cpp_headers/config.o 00:04:07.362 CXX test/cpp_headers/cpuset.o 00:04:07.362 LINK bdevperf 00:04:07.362 CC examples/nvme/hello_world/hello_world.o 00:04:07.362 CXX test/cpp_headers/crc16.o 00:04:07.362 LINK ioat_perf 00:04:07.362 CC examples/sock/hello_world/hello_sock.o 00:04:07.621 CC test/nvme/aer/aer.o 00:04:07.621 CXX test/cpp_headers/crc32.o 00:04:07.621 LINK spdk_nvme_perf 00:04:07.621 CC test/rpc_client/rpc_client_test.o 00:04:07.621 LINK blobcli 00:04:07.621 CC examples/ioat/verify/verify.o 00:04:07.621 CXX test/cpp_headers/crc64.o 00:04:07.621 LINK hello_world 00:04:07.621 LINK hello_sock 00:04:07.879 CC app/spdk_nvme_identify/identify.o 00:04:07.879 LINK rpc_client_test 00:04:07.879 LINK verify 00:04:07.879 LINK aer 00:04:07.879 CXX test/cpp_headers/dif.o 00:04:07.879 CC examples/nvme/reconnect/reconnect.o 00:04:07.879 CC test/nvme/reset/reset.o 00:04:07.879 CC test/thread/poller_perf/poller_perf.o 00:04:07.879 CXX test/cpp_headers/dma.o 00:04:07.879 CC test/nvme/sgl/sgl.o 00:04:07.879 CC test/nvme/e2edp/nvme_dp.o 00:04:07.879 CC test/app/stub/stub.o 00:04:07.879 LINK poller_perf 00:04:07.879 LINK reset 00:04:08.141 CXX test/cpp_headers/endian.o 00:04:08.141 LINK stub 00:04:08.141 LINK reconnect 00:04:08.141 CC test/nvme/err_injection/err_injection.o 00:04:08.141 CC test/nvme/overhead/overhead.o 00:04:08.141 CXX test/cpp_headers/env_dpdk.o 00:04:08.141 LINK iscsi_fuzz 00:04:08.141 LINK sgl 00:04:08.141 LINK nvme_dp 00:04:08.400 LINK err_injection 00:04:08.400 CC test/nvme/startup/startup.o 00:04:08.400 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.400 CXX test/cpp_headers/env.o 00:04:08.400 LINK spdk_nvme_identify 00:04:08.400 CC test/nvme/simple_copy/simple_copy.o 00:04:08.400 CC test/nvme/reserve/reserve.o 00:04:08.400 LINK overhead 00:04:08.400 CC test/nvme/connect_stress/connect_stress.o 00:04:08.400 CC test/nvme/boot_partition/boot_partition.o 00:04:08.400 CXX test/cpp_headers/event.o 00:04:08.400 LINK startup 00:04:08.659 LINK simple_copy 00:04:08.659 CXX test/cpp_headers/fd_group.o 00:04:08.659 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.659 LINK boot_partition 00:04:08.659 LINK reserve 00:04:08.659 CC test/nvme/compliance/nvme_compliance.o 00:04:08.659 LINK connect_stress 00:04:08.659 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.659 CXX test/cpp_headers/fd.o 00:04:08.659 CXX test/cpp_headers/file.o 00:04:08.659 LINK nvme_manage 00:04:08.659 CXX test/cpp_headers/ftl.o 00:04:08.659 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.659 CXX test/cpp_headers/gpt_spec.o 00:04:08.659 LINK spdk_nvme_discover 00:04:08.917 LINK fused_ordering 00:04:08.917 CC examples/nvme/arbitration/arbitration.o 00:04:08.917 CC examples/nvme/hotplug/hotplug.o 00:04:08.917 CXX test/cpp_headers/hexlify.o 00:04:08.917 CC examples/vmd/lsvmd/lsvmd.o 00:04:08.917 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.917 LINK doorbell_aers 00:04:08.917 LINK nvme_compliance 00:04:08.917 CC app/spdk_top/spdk_top.o 00:04:08.917 LINK lsvmd 00:04:08.917 CC examples/nvme/abort/abort.o 00:04:08.917 CXX test/cpp_headers/histogram_data.o 00:04:08.917 LINK cmb_copy 00:04:09.176 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:09.176 CC test/nvme/fdp/fdp.o 00:04:09.176 LINK hotplug 00:04:09.176 CXX test/cpp_headers/idxd.o 00:04:09.176 CC 
examples/vmd/led/led.o 00:04:09.176 LINK arbitration 00:04:09.176 LINK pmr_persistence 00:04:09.176 CC test/nvme/cuse/cuse.o 00:04:09.176 CXX test/cpp_headers/idxd_spec.o 00:04:09.176 LINK abort 00:04:09.176 LINK led 00:04:09.176 CC app/vhost/vhost.o 00:04:09.176 CXX test/cpp_headers/init.o 00:04:09.176 CXX test/cpp_headers/ioat.o 00:04:09.435 LINK fdp 00:04:09.435 CXX test/cpp_headers/ioat_spec.o 00:04:09.435 LINK vhost 00:04:09.435 CC examples/nvmf/nvmf/nvmf.o 00:04:09.435 CC app/spdk_dd/spdk_dd.o 00:04:09.435 CC examples/util/zipf/zipf.o 00:04:09.435 CC app/fio/nvme/fio_plugin.o 00:04:09.435 CXX test/cpp_headers/iscsi_spec.o 00:04:09.435 CC app/fio/bdev/fio_plugin.o 00:04:09.435 CXX test/cpp_headers/json.o 00:04:09.694 LINK spdk_top 00:04:09.694 LINK zipf 00:04:09.694 CXX test/cpp_headers/jsonrpc.o 00:04:09.694 LINK nvmf 00:04:09.694 CXX test/cpp_headers/likely.o 00:04:09.694 CXX test/cpp_headers/log.o 00:04:09.694 LINK spdk_dd 00:04:09.694 CC examples/thread/thread/thread_ex.o 00:04:09.694 CXX test/cpp_headers/lvol.o 00:04:09.952 CXX test/cpp_headers/memory.o 00:04:09.952 CC examples/idxd/perf/perf.o 00:04:09.952 CXX test/cpp_headers/mmio.o 00:04:09.952 LINK spdk_bdev 00:04:09.952 LINK spdk_nvme 00:04:09.952 CXX test/cpp_headers/nbd.o 00:04:09.952 CXX test/cpp_headers/notify.o 00:04:09.952 CXX test/cpp_headers/nvme.o 00:04:09.952 LINK cuse 00:04:09.952 CXX test/cpp_headers/nvme_intel.o 00:04:09.952 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.952 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.952 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.952 LINK thread 00:04:09.952 CXX test/cpp_headers/nvme_spec.o 00:04:09.952 CXX test/cpp_headers/nvme_zns.o 00:04:10.211 CXX test/cpp_headers/nvmf_cmd.o 00:04:10.211 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:10.211 CXX test/cpp_headers/nvmf.o 00:04:10.211 CXX test/cpp_headers/nvmf_spec.o 00:04:10.211 LINK idxd_perf 00:04:10.211 LINK interrupt_tgt 00:04:10.211 CXX test/cpp_headers/nvmf_transport.o 00:04:10.211 CXX test/cpp_headers/opal.o 00:04:10.211 CXX test/cpp_headers/opal_spec.o 00:04:10.211 CXX test/cpp_headers/pci_ids.o 00:04:10.211 CXX test/cpp_headers/pipe.o 00:04:10.211 CXX test/cpp_headers/queue.o 00:04:10.211 CXX test/cpp_headers/reduce.o 00:04:10.211 CXX test/cpp_headers/rpc.o 00:04:10.211 CXX test/cpp_headers/scheduler.o 00:04:10.211 CXX test/cpp_headers/scsi.o 00:04:10.211 CXX test/cpp_headers/scsi_spec.o 00:04:10.211 CXX test/cpp_headers/sock.o 00:04:10.211 CXX test/cpp_headers/stdinc.o 00:04:10.211 CXX test/cpp_headers/string.o 00:04:10.469 CXX test/cpp_headers/thread.o 00:04:10.469 CXX test/cpp_headers/trace.o 00:04:10.469 CXX test/cpp_headers/trace_parser.o 00:04:10.469 CXX test/cpp_headers/tree.o 00:04:10.469 CXX test/cpp_headers/ublk.o 00:04:10.469 CXX test/cpp_headers/util.o 00:04:10.469 CXX test/cpp_headers/uuid.o 00:04:10.469 CXX test/cpp_headers/version.o 00:04:10.469 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.469 CXX test/cpp_headers/vhost.o 00:04:10.469 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.469 CXX test/cpp_headers/vmd.o 00:04:10.469 CXX test/cpp_headers/xor.o 00:04:10.469 CXX test/cpp_headers/zipf.o 00:04:11.036 LINK esnap 00:04:11.298 ************************************ 00:04:11.298 END TEST make 00:04:11.298 ************************************ 00:04:11.298 00:04:11.298 real 0m40.016s 00:04:11.298 user 3m52.178s 00:04:11.298 sys 0m42.700s 00:04:11.298 23:58:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:11.298 23:58:25 -- common/autotest_common.sh@10 -- $ set +x 00:04:11.298 23:58:25 -- 
common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.298 23:58:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.298 23:58:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.556 23:58:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.556 23:58:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.556 23:58:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.556 23:58:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.556 23:58:26 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.557 23:58:26 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.557 23:58:26 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.557 23:58:26 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.557 23:58:26 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.557 23:58:26 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.557 23:58:26 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.557 23:58:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.557 23:58:26 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.557 23:58:26 -- scripts/common.sh@344 -- # : 1 00:04:11.557 23:58:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.557 23:58:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.557 23:58:26 -- scripts/common.sh@364 -- # decimal 1 00:04:11.557 23:58:26 -- scripts/common.sh@352 -- # local d=1 00:04:11.557 23:58:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.557 23:58:26 -- scripts/common.sh@354 -- # echo 1 00:04:11.557 23:58:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.557 23:58:26 -- scripts/common.sh@365 -- # decimal 2 00:04:11.557 23:58:26 -- scripts/common.sh@352 -- # local d=2 00:04:11.557 23:58:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.557 23:58:26 -- scripts/common.sh@354 -- # echo 2 00:04:11.557 23:58:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.557 23:58:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.557 23:58:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.557 23:58:26 -- scripts/common.sh@367 -- # return 0 00:04:11.557 23:58:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.557 23:58:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.557 --rc genhtml_branch_coverage=1 00:04:11.557 --rc genhtml_function_coverage=1 00:04:11.557 --rc genhtml_legend=1 00:04:11.557 --rc geninfo_all_blocks=1 00:04:11.557 --rc geninfo_unexecuted_blocks=1 00:04:11.557 00:04:11.557 ' 00:04:11.557 23:58:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.557 --rc genhtml_branch_coverage=1 00:04:11.557 --rc genhtml_function_coverage=1 00:04:11.557 --rc genhtml_legend=1 00:04:11.557 --rc geninfo_all_blocks=1 00:04:11.557 --rc geninfo_unexecuted_blocks=1 00:04:11.557 00:04:11.557 ' 00:04:11.557 23:58:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.557 --rc genhtml_branch_coverage=1 00:04:11.557 --rc genhtml_function_coverage=1 00:04:11.557 --rc genhtml_legend=1 00:04:11.557 --rc geninfo_all_blocks=1 00:04:11.557 --rc geninfo_unexecuted_blocks=1 00:04:11.557 00:04:11.557 ' 00:04:11.557 23:58:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.557 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:11.557 --rc genhtml_branch_coverage=1 00:04:11.557 --rc genhtml_function_coverage=1 00:04:11.557 --rc genhtml_legend=1 00:04:11.557 --rc geninfo_all_blocks=1 00:04:11.557 --rc geninfo_unexecuted_blocks=1 00:04:11.557 00:04:11.557 ' 00:04:11.557 23:58:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:11.557 23:58:26 -- nvmf/common.sh@7 -- # uname -s 00:04:11.557 23:58:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.557 23:58:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.557 23:58:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.557 23:58:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.557 23:58:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.557 23:58:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.557 23:58:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.557 23:58:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.557 23:58:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.557 23:58:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.557 23:58:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:04:11.557 23:58:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:04:11.557 23:58:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.557 23:58:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.557 23:58:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.557 23:58:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:11.557 23:58:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.557 23:58:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.557 23:58:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.557 23:58:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.557 23:58:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.557 23:58:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.557 23:58:26 -- paths/export.sh@5 -- # export PATH 00:04:11.557 23:58:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.557 23:58:26 -- nvmf/common.sh@46 -- # : 0 00:04:11.557 23:58:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:11.557 23:58:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:11.557 23:58:26 -- nvmf/common.sh@24 -- # '[' 
0 -eq 1 ']' 00:04:11.557 23:58:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.557 23:58:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.557 23:58:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:11.557 23:58:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:11.557 23:58:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:11.557 23:58:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.557 23:58:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.557 23:58:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.557 23:58:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:11.557 23:58:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.557 23:58:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.557 23:58:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.557 23:58:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:11.557 23:58:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:11.557 23:58:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:11.557 23:58:26 -- spdk/autotest.sh@48 -- # udevadm_pid=60591 00:04:11.557 23:58:26 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.557 23:58:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:11.557 23:58:26 -- spdk/autotest.sh@54 -- # echo 60603 00:04:11.557 23:58:26 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.557 23:58:26 -- spdk/autotest.sh@56 -- # echo 60621 00:04:11.557 23:58:26 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:11.557 23:58:26 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:11.557 23:58:26 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:11.558 23:58:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.558 23:58:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.558 23:58:26 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.558 23:58:26 -- spdk/autotest.sh@70 -- # create_test_list 00:04:11.558 23:58:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:11.558 23:58:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.558 23:58:26 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:11.558 23:58:26 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:11.558 23:58:26 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:11.558 23:58:26 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:11.558 23:58:26 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:11.558 23:58:26 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:11.558 23:58:26 -- common/autotest_common.sh@1450 -- # uname 00:04:11.558 23:58:26 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:11.558 23:58:26 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:11.558 23:58:26 -- common/autotest_common.sh@1470 -- # uname 00:04:11.558 23:58:26 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:11.558 23:58:26 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:11.558 23:58:26 -- spdk/autotest.sh@81 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:11.856 lcov: LCOV version 1.15 00:04:11.856 23:58:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:18.431 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:18.431 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:18.431 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:18.431 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:18.431 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:18.431 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:40.375 23:58:52 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:40.375 23:58:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.375 23:58:52 -- common/autotest_common.sh@10 -- # set +x 00:04:40.375 23:58:52 -- spdk/autotest.sh@89 -- # rm -f 00:04:40.375 23:58:52 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.375 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:40.375 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:40.375 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:40.375 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:40.375 23:58:53 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:40.375 23:58:53 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:40.375 23:58:53 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:40.375 23:58:53 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:40.375 23:58:53 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:40.375 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:40.375 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.375 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.375 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:40.375 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:40.376 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:40.376 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.376 23:58:53 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:40.376 23:58:53 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:40.376 23:58:53 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:40.376 23:58:53 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:40.376 23:58:53 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # grep -v p 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:53 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:40.376 23:58:53 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:40.376 23:58:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:40.376 No valid GPT data, bailing 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:53 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:53 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062097 s, 169 MB/s 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:53 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:40.376 23:58:53 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:40.376 23:58:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:40.376 No valid GPT data, bailing 
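For reference, a reduced sketch of the per-namespace pre-clean that the blkid/dd trace above and below walks through; the blkid and dd invocations mirror the log, but the condensed loop is an assumption, not the SPDK autotest script itself:

for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
    # blkid prints a partition-table type (gpt, dos, ...) when one exists;
    # an empty result means the namespace looks unused and is safe to scrub.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        echo "zeroing first 1MiB of $dev"
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "skipping $dev (partition table: $pt)"
    fi
done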
00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:53 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:53 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233913 s, 44.8 MB/s 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:53 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:40.376 23:58:53 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:40.376 23:58:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:40.376 No valid GPT data, bailing 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:53 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:53 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481269 s, 218 MB/s 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:53 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:04:40.376 23:58:53 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:40.376 23:58:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:40.376 No valid GPT data, bailing 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:40.376 23:58:53 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:53 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:53 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652125 s, 161 MB/s 00:04:40.376 23:58:53 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:53 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:53 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:04:40.376 23:58:53 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:40.376 23:58:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:40.376 No valid GPT data, bailing 00:04:40.376 23:58:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:40.376 23:58:54 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:54 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:54 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625468 s, 168 MB/s 00:04:40.376 23:58:54 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:40.376 23:58:54 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:40.376 23:58:54 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:40.376 23:58:54 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:40.376 23:58:54 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:40.376 No valid GPT data, bailing 00:04:40.376 23:58:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:40.376 23:58:54 -- scripts/common.sh@393 -- # pt= 00:04:40.376 23:58:54 -- scripts/common.sh@394 -- # return 1 00:04:40.376 23:58:54 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:40.376 1+0 records in 00:04:40.376 1+0 records out 00:04:40.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563319 s, 186 MB/s 00:04:40.376 23:58:54 -- spdk/autotest.sh@116 -- # sync 00:04:40.376 23:58:54 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:40.376 23:58:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:40.376 23:58:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.772 23:58:56 -- spdk/autotest.sh@122 -- # uname -s 00:04:41.772 23:58:56 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:41.772 23:58:56 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.772 23:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.772 23:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.772 23:58:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.772 ************************************ 00:04:41.772 START TEST setup.sh 00:04:41.772 ************************************ 00:04:41.772 23:58:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:41.772 * Looking for test storage... 00:04:41.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.772 23:58:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.772 23:58:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.772 23:58:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.035 23:58:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.035 23:58:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.035 23:58:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.035 23:58:56 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.035 23:58:56 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.035 23:58:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.035 23:58:56 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.035 23:58:56 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.035 23:58:56 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.035 23:58:56 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.035 23:58:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.035 23:58:56 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.035 23:58:56 -- scripts/common.sh@344 -- # : 1 00:04:42.035 23:58:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.035 23:58:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.035 23:58:56 -- scripts/common.sh@364 -- # decimal 1 00:04:42.035 23:58:56 -- scripts/common.sh@352 -- # local d=1 00:04:42.035 23:58:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.035 23:58:56 -- scripts/common.sh@354 -- # echo 1 00:04:42.035 23:58:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.035 23:58:56 -- scripts/common.sh@365 -- # decimal 2 00:04:42.035 23:58:56 -- scripts/common.sh@352 -- # local d=2 00:04:42.035 23:58:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.035 23:58:56 -- scripts/common.sh@354 -- # echo 2 00:04:42.035 23:58:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.035 23:58:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.035 23:58:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.035 23:58:56 -- scripts/common.sh@367 -- # return 0 00:04:42.035 23:58:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.035 --rc genhtml_branch_coverage=1 00:04:42.035 --rc genhtml_function_coverage=1 00:04:42.035 --rc genhtml_legend=1 00:04:42.035 --rc geninfo_all_blocks=1 00:04:42.035 --rc geninfo_unexecuted_blocks=1 00:04:42.035 00:04:42.035 ' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.035 --rc genhtml_branch_coverage=1 00:04:42.035 --rc genhtml_function_coverage=1 00:04:42.035 --rc genhtml_legend=1 00:04:42.035 --rc geninfo_all_blocks=1 00:04:42.035 --rc geninfo_unexecuted_blocks=1 00:04:42.035 00:04:42.035 ' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.035 --rc genhtml_branch_coverage=1 00:04:42.035 --rc genhtml_function_coverage=1 00:04:42.035 --rc genhtml_legend=1 00:04:42.035 --rc geninfo_all_blocks=1 00:04:42.035 --rc geninfo_unexecuted_blocks=1 00:04:42.035 00:04:42.035 ' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.035 --rc genhtml_branch_coverage=1 00:04:42.035 --rc genhtml_function_coverage=1 00:04:42.035 --rc genhtml_legend=1 00:04:42.035 --rc geninfo_all_blocks=1 00:04:42.035 --rc geninfo_unexecuted_blocks=1 00:04:42.035 00:04:42.035 ' 00:04:42.035 23:58:56 -- setup/test-setup.sh@10 -- # uname -s 00:04:42.035 23:58:56 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:42.035 23:58:56 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:42.035 23:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.035 23:58:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.035 ************************************ 00:04:42.035 START TEST acl 00:04:42.035 ************************************ 00:04:42.035 23:58:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:42.035 * Looking for test storage... 
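The xtrace passes above and below repeat the same dotted-version check (lt 1.15 2 via cmp_versions) before choosing lcov options; a minimal sketch of that comparison follows, with a hypothetical helper name and a condensed body that is an assumption rather than scripts/common.sh itself:

# cmp_lt is a hypothetical stand-in for the traced lt/cmp_versions pair.
cmp_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.- read -ra v2 <<< "$2"    # "2"    -> (2)
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0        # strictly smaller -> "<" holds
        (( a > b )) && return 1        # strictly larger  -> "<" fails
    done
    return 1                           # equal -> not strictly less
}

cmp_lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function coverage flags"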
00:04:42.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.035 23:58:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.035 23:58:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.035 23:58:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.035 23:58:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.035 23:58:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.035 23:58:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.035 23:58:56 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.035 23:58:56 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.035 23:58:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.035 23:58:56 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.035 23:58:56 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.035 23:58:56 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.035 23:58:56 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.035 23:58:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.035 23:58:56 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.035 23:58:56 -- scripts/common.sh@344 -- # : 1 00:04:42.035 23:58:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.035 23:58:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.035 23:58:56 -- scripts/common.sh@364 -- # decimal 1 00:04:42.035 23:58:56 -- scripts/common.sh@352 -- # local d=1 00:04:42.035 23:58:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.035 23:58:56 -- scripts/common.sh@354 -- # echo 1 00:04:42.035 23:58:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.035 23:58:56 -- scripts/common.sh@365 -- # decimal 2 00:04:42.035 23:58:56 -- scripts/common.sh@352 -- # local d=2 00:04:42.035 23:58:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.035 23:58:56 -- scripts/common.sh@354 -- # echo 2 00:04:42.035 23:58:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.035 23:58:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.035 23:58:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.035 23:58:56 -- scripts/common.sh@367 -- # return 0 00:04:42.035 23:58:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.035 23:58:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.035 --rc genhtml_branch_coverage=1 00:04:42.035 --rc genhtml_function_coverage=1 00:04:42.035 --rc genhtml_legend=1 00:04:42.035 --rc geninfo_all_blocks=1 00:04:42.035 --rc geninfo_unexecuted_blocks=1 00:04:42.035 00:04:42.036 ' 00:04:42.036 23:58:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.036 --rc genhtml_branch_coverage=1 00:04:42.036 --rc genhtml_function_coverage=1 00:04:42.036 --rc genhtml_legend=1 00:04:42.036 --rc geninfo_all_blocks=1 00:04:42.036 --rc geninfo_unexecuted_blocks=1 00:04:42.036 00:04:42.036 ' 00:04:42.036 23:58:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.036 --rc genhtml_branch_coverage=1 00:04:42.036 --rc genhtml_function_coverage=1 00:04:42.036 --rc genhtml_legend=1 00:04:42.036 --rc geninfo_all_blocks=1 00:04:42.036 --rc geninfo_unexecuted_blocks=1 00:04:42.036 00:04:42.036 ' 00:04:42.036 23:58:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.036 --rc genhtml_branch_coverage=1 00:04:42.036 --rc genhtml_function_coverage=1 00:04:42.036 --rc genhtml_legend=1 00:04:42.036 --rc geninfo_all_blocks=1 00:04:42.036 --rc geninfo_unexecuted_blocks=1 00:04:42.036 00:04:42.036 ' 00:04:42.036 23:58:56 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:42.036 23:58:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:42.036 23:58:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:42.036 23:58:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:42.036 23:58:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:42.036 23:58:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:42.036 
23:58:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:42.036 23:58:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:42.036 23:58:56 -- setup/acl.sh@12 -- # devs=() 00:04:42.036 23:58:56 -- setup/acl.sh@12 -- # declare -a devs 00:04:42.036 23:58:56 -- setup/acl.sh@13 -- # drivers=() 00:04:42.036 23:58:56 -- setup/acl.sh@13 -- # declare -A drivers 00:04:42.036 23:58:56 -- setup/acl.sh@51 -- # setup reset 00:04:42.036 23:58:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.036 23:58:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.424 23:58:57 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:43.424 23:58:57 -- setup/acl.sh@16 -- # local dev driver 00:04:43.424 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.424 23:58:57 -- setup/acl.sh@15 -- # setup output status 00:04:43.424 23:58:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.424 23:58:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:43.425 Hugepages 00:04:43.425 node hugesize free / total 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # continue 00:04:43.425 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.425 00:04:43.425 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # continue 00:04:43.425 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:43.425 23:58:57 -- setup/acl.sh@20 -- # continue 00:04:43.425 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.425 23:58:57 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.425 23:58:57 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.425 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.425 23:58:57 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.425 23:58:57 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:43.425 23:58:57 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.425 23:58:57 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.425 23:58:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.686 23:58:58 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:43.686 23:58:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.686 23:58:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:43.686 23:58:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.686 23:58:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.686 23:58:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.686 23:58:58 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:43.686 23:58:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.686 23:58:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:43.686 23:58:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.686 23:58:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
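Alongside the acl setup above, the get_zoned_devs traces walk each NVMe namespace's sysfs entry to decide whether it is zoned; a reduced sketch of that screening, where the sysfs path comes straight from the trace and the condensed loop is an assumption, not the original helper:

declare -a zoned_devs=()
for path in /sys/block/nvme*n*; do
    [[ -e $path ]] || continue          # glob did not match: nothing to scan
    dev=${path##*/}
    # Non-zoned namespaces report "none" in queue/zoned; any other value
    # (e.g. "host-managed") means the namespace needs zone-aware handling
    # and is excluded from the generic block-device tests.
    if [[ -e $path/queue/zoned && $(<"$path/queue/zoned") != none ]]; then
        zoned_devs+=("$dev")
    fi
done
echo "zoned namespaces: ${zoned_devs[*]:-none}"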
00:04:43.686 23:58:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.686 23:58:58 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:43.686 23:58:58 -- setup/acl.sh@54 -- # run_test denied denied 00:04:43.686 23:58:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.686 23:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.686 23:58:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.686 ************************************ 00:04:43.686 START TEST denied 00:04:43.686 ************************************ 00:04:43.686 23:58:58 -- common/autotest_common.sh@1114 -- # denied 00:04:43.686 23:58:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:43.686 23:58:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:43.686 23:58:58 -- setup/acl.sh@38 -- # setup output config 00:04:43.686 23:58:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.686 23:58:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.074 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:45.074 23:58:59 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:45.074 23:58:59 -- setup/acl.sh@28 -- # local dev driver 00:04:45.074 23:58:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:45.074 23:58:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:45.074 23:58:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:45.074 23:58:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.074 23:58:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.074 23:58:59 -- setup/acl.sh@41 -- # setup reset 00:04:45.074 23:58:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.074 23:58:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.668 00:04:51.668 real 0m7.084s 00:04:51.668 user 0m0.702s 00:04:51.668 sys 0m1.189s 00:04:51.668 23:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.668 23:59:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.668 ************************************ 00:04:51.668 END TEST denied 00:04:51.668 ************************************ 00:04:51.668 23:59:05 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:51.668 23:59:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.668 23:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.668 23:59:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.668 ************************************ 00:04:51.668 START TEST allowed 00:04:51.668 ************************************ 00:04:51.668 23:59:05 -- common/autotest_common.sh@1114 -- # allowed 00:04:51.668 23:59:05 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:51.668 23:59:05 -- setup/acl.sh@45 -- # setup output config 00:04:51.668 23:59:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.668 23:59:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.668 23:59:05 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:51.930 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.930 23:59:06 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:51.930 23:59:06 -- setup/acl.sh@28 -- # local dev driver 00:04:51.930 23:59:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.930 23:59:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.930 23:59:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.930 23:59:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.930 23:59:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.930 23:59:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.930 23:59:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.930 23:59:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:51.930 23:59:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.930 23:59:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.930 23:59:06 -- setup/acl.sh@48 -- # setup reset 00:04:51.930 23:59:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.930 23:59:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.874 ************************************ 00:04:52.874 END TEST allowed 00:04:52.874 ************************************ 00:04:52.874 00:04:52.874 real 0m2.054s 00:04:52.874 user 0m0.769s 00:04:52.874 sys 0m1.047s 00:04:52.874 23:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.874 23:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.874 ************************************ 00:04:52.874 END TEST acl 00:04:52.874 ************************************ 00:04:52.874 00:04:52.874 real 0m10.935s 00:04:52.874 user 0m2.165s 00:04:52.874 sys 0m3.172s 00:04:52.874 23:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.874 23:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.874 23:59:07 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:52.874 23:59:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.874 23:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.874 23:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.874 ************************************ 00:04:52.874 START TEST hugepages 00:04:52.874 ************************************ 00:04:52.874 23:59:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:53.137 * Looking for test storage... 
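The hugepages run that begins here spends most of the trace below inside setup/common.sh's get_meminfo helper: the long sequence of `[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]` / `continue` pairs is a per-key scan of /proc/meminfo that ends with `echo 2048`, i.e. Hugepagesize is 2048 kB and default_hugepages is set from it. A condensed sketch of that helper, reconstructed from the trace, follows; the per-node branch and the prefix stripping are assumptions based on the paths and parameter expansions shown.

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the get_meminfo helper traced below (test/setup/common.sh).
    # It returns one /proc/meminfo value, optionally from a NUMA node's meminfo
    # file; in this run it resolves Hugepagesize -> 2048 (kB).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Assumed per-node variant, as suggested by the node/meminfo path in the trace.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix of per-node files

        local line var val rest
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on the machine in this log
    echo "Hugepagesize: ${default_hugepages} kB"

The same scan is repeated later in the trace for AnonHugePages, HugePages_Surp and HugePages_Rsvd while verify_nr_hugepages checks the default_setup result.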
00:04:53.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.137 23:59:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.137 23:59:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.137 23:59:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.137 23:59:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.137 23:59:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.137 23:59:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.137 23:59:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.137 23:59:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.137 23:59:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.137 23:59:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.137 23:59:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.137 23:59:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.137 23:59:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.137 23:59:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.137 23:59:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.137 23:59:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.137 23:59:07 -- scripts/common.sh@344 -- # : 1 00:04:53.137 23:59:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.137 23:59:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.137 23:59:07 -- scripts/common.sh@364 -- # decimal 1 00:04:53.137 23:59:07 -- scripts/common.sh@352 -- # local d=1 00:04:53.137 23:59:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.137 23:59:07 -- scripts/common.sh@354 -- # echo 1 00:04:53.137 23:59:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.137 23:59:07 -- scripts/common.sh@365 -- # decimal 2 00:04:53.137 23:59:07 -- scripts/common.sh@352 -- # local d=2 00:04:53.137 23:59:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.137 23:59:07 -- scripts/common.sh@354 -- # echo 2 00:04:53.137 23:59:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.137 23:59:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.137 23:59:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.137 23:59:07 -- scripts/common.sh@367 -- # return 0 00:04:53.137 23:59:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.137 23:59:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.137 --rc genhtml_branch_coverage=1 00:04:53.137 --rc genhtml_function_coverage=1 00:04:53.137 --rc genhtml_legend=1 00:04:53.137 --rc geninfo_all_blocks=1 00:04:53.137 --rc geninfo_unexecuted_blocks=1 00:04:53.137 00:04:53.137 ' 00:04:53.137 23:59:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.137 --rc genhtml_branch_coverage=1 00:04:53.137 --rc genhtml_function_coverage=1 00:04:53.137 --rc genhtml_legend=1 00:04:53.137 --rc geninfo_all_blocks=1 00:04:53.137 --rc geninfo_unexecuted_blocks=1 00:04:53.137 00:04:53.137 ' 00:04:53.137 23:59:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.137 --rc genhtml_branch_coverage=1 00:04:53.137 --rc genhtml_function_coverage=1 00:04:53.137 --rc genhtml_legend=1 00:04:53.137 --rc geninfo_all_blocks=1 00:04:53.137 --rc geninfo_unexecuted_blocks=1 00:04:53.137 00:04:53.137 ' 00:04:53.137 23:59:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.137 --rc genhtml_branch_coverage=1 00:04:53.137 --rc genhtml_function_coverage=1 00:04:53.137 --rc genhtml_legend=1 00:04:53.137 --rc geninfo_all_blocks=1 00:04:53.137 --rc geninfo_unexecuted_blocks=1 00:04:53.137 00:04:53.137 ' 00:04:53.137 23:59:07 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:53.137 23:59:07 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:53.137 23:59:07 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:53.137 23:59:07 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:53.137 23:59:07 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:53.137 23:59:07 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:53.137 23:59:07 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:53.137 23:59:07 -- setup/common.sh@18 -- # local node= 00:04:53.137 23:59:07 -- setup/common.sh@19 -- # local var val 00:04:53.137 23:59:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.137 23:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.137 23:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.137 23:59:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.137 23:59:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.137 23:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.137 23:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 4381192 kB' 'MemAvailable: 7350848 kB' 'Buffers: 3696 kB' 'Cached: 3171292 kB' 'SwapCached: 0 kB' 'Active: 465460 kB' 'Inactive: 2825232 kB' 'Active(anon): 126240 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 117372 kB' 'Mapped: 50888 kB' 'Shmem: 10536 kB' 'KReclaimable: 84712 kB' 'Slab: 190056 kB' 'SReclaimable: 84712 kB' 'SUnreclaim: 105344 kB' 'KernelStack: 6848 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12409996 kB' 'Committed_AS: 330244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.137 23:59:07 -- 
setup/common.sh@32 -- # continue 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.137 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.137 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.138 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.138 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # continue 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.139 23:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.139 23:59:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.139 23:59:07 -- setup/common.sh@33 -- # echo 2048 00:04:53.139 23:59:07 -- setup/common.sh@33 -- # return 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:53.139 23:59:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:53.139 23:59:07 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:53.139 23:59:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:53.139 23:59:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:53.139 23:59:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:53.139 23:59:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:53.139 23:59:07 -- setup/hugepages.sh@207 -- # get_nodes 00:04:53.139 23:59:07 -- setup/hugepages.sh@27 -- # local node 00:04:53.139 23:59:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.139 23:59:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:53.139 23:59:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.139 23:59:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.139 23:59:07 -- setup/hugepages.sh@208 -- # clear_hp 00:04:53.139 23:59:07 -- setup/hugepages.sh@37 -- # local node hp 00:04:53.139 23:59:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:53.139 23:59:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.139 23:59:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:53.139 23:59:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:53.139 23:59:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:53.139 23:59:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:53.139 23:59:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.139 23:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.139 23:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.139 ************************************ 00:04:53.139 START TEST default_setup 00:04:53.139 ************************************ 00:04:53.139 23:59:07 -- common/autotest_common.sh@1114 -- # default_setup 00:04:53.139 23:59:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.139 23:59:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.139 23:59:07 -- setup/hugepages.sh@51 -- # shift 00:04:53.139 23:59:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.139 23:59:07 -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.139 23:59:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.139 23:59:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.139 23:59:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:53.139 23:59:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.139 23:59:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.139 23:59:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.139 23:59:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.139 23:59:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.139 23:59:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.139 23:59:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.139 23:59:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:53.139 23:59:07 -- setup/hugepages.sh@73 -- # return 0 00:04:53.139 23:59:07 -- setup/hugepages.sh@137 -- # setup output 00:04:53.139 23:59:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.139 23:59:07 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.344 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.344 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.344 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.344 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.344 23:59:08 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:54.344 23:59:08 -- setup/hugepages.sh@89 -- # local node 00:04:54.344 23:59:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.344 23:59:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.344 23:59:08 -- setup/hugepages.sh@92 -- # local surp 00:04:54.344 23:59:08 -- setup/hugepages.sh@93 -- # local resv 00:04:54.344 23:59:08 -- setup/hugepages.sh@94 -- # local anon 00:04:54.344 23:59:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.344 23:59:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.344 23:59:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.344 23:59:08 -- setup/common.sh@18 -- # local node= 00:04:54.344 23:59:08 -- setup/common.sh@19 -- # local var val 00:04:54.344 23:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.344 23:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.344 23:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.344 23:59:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.344 23:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.344 23:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.344 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6476196 kB' 'MemAvailable: 9445640 kB' 'Buffers: 3696 kB' 'Cached: 3171280 kB' 'SwapCached: 0 kB' 'Active: 467036 kB' 'Inactive: 2825256 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118928 kB' 'Mapped: 50784 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189732 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105492 kB' 'KernelStack: 6832 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- 
setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # 
[[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.345 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.345 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.346 23:59:08 -- setup/common.sh@33 -- # echo 0 00:04:54.346 23:59:08 -- setup/common.sh@33 -- # return 0 00:04:54.346 23:59:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.346 23:59:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.346 23:59:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.346 23:59:08 -- setup/common.sh@18 -- # local node= 00:04:54.346 23:59:08 -- setup/common.sh@19 -- # local var val 00:04:54.346 23:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.346 23:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.346 23:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.346 23:59:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.346 23:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.346 23:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6476704 kB' 'MemAvailable: 9446148 kB' 'Buffers: 3696 kB' 'Cached: 3171280 kB' 'SwapCached: 0 kB' 'Active: 467036 kB' 'Inactive: 2825256 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118928 kB' 'Mapped: 50784 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189820 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105580 kB' 'KernelStack: 6800 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.346 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.346 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # 
continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.347 23:59:08 -- setup/common.sh@33 -- # echo 0 00:04:54.347 23:59:08 -- setup/common.sh@33 -- # return 0 00:04:54.347 23:59:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.347 23:59:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.347 23:59:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.347 23:59:08 -- setup/common.sh@18 -- # local node= 00:04:54.347 23:59:08 -- setup/common.sh@19 -- # local var val 00:04:54.347 23:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.347 23:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.347 23:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.347 23:59:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.347 23:59:08 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:54.347 23:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6476704 kB' 'MemAvailable: 9446152 kB' 'Buffers: 3696 kB' 'Cached: 3171280 kB' 'SwapCached: 0 kB' 'Active: 466568 kB' 'Inactive: 2825260 kB' 'Active(anon): 127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 50784 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189832 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105592 kB' 'KernelStack: 6848 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.347 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.347 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 
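The trace above is the test's get_meminfo helper from setup/common.sh walking /proc/meminfo field by field until it reaches the key it was asked for (HugePages_Surp in the previous pass, HugePages_Rsvd here); every non-matching key just takes the `continue` branch, which is why the same IFS / read / [[ ... ]] triplet repeats once per meminfo line. A minimal reconstruction of that pattern, pieced together from the traced statements (the names get, node, mem_f, mem, var and val appear in the trace; the function body in the actual repository may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " prefix-strip seen in the trace needs extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field except the requested one
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as, say, `surp=$(get_meminfo HugePages_Surp)` for the machine-wide value or `get_meminfo HugePages_Surp 0` for node 0, it prints just the number (0 and 1024 in this run), which is what the `echo 0` / `return 0` lines in the trace correspond to.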
00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- 
setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.348 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.348 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.349 23:59:08 -- setup/common.sh@33 -- # echo 0 00:04:54.349 23:59:08 -- setup/common.sh@33 -- # return 0 00:04:54.349 nr_hugepages=1024 00:04:54.349 resv_hugepages=0 00:04:54.349 surplus_hugepages=0 00:04:54.349 anon_hugepages=0 00:04:54.349 23:59:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.349 23:59:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.349 23:59:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.349 23:59:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.349 23:59:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.349 23:59:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.349 23:59:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.349 23:59:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.349 23:59:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.349 23:59:08 -- setup/common.sh@18 -- # local node= 00:04:54.349 23:59:08 -- setup/common.sh@19 -- # local var val 00:04:54.349 23:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.349 23:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.349 23:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.349 23:59:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.349 23:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.349 23:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6476704 kB' 'MemAvailable: 9446152 kB' 'Buffers: 3696 kB' 'Cached: 3171280 kB' 'SwapCached: 0 kB' 'Active: 466652 kB' 'Inactive: 2825260 kB' 'Active(anon): 127432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118576 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189800 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105560 kB' 'KernelStack: 6832 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.349 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.349 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 
23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 
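The hugepages.sh lines interleaved above (@99 through @110) show what this repetition is for: the test reads HugePages_Surp and HugePages_Rsvd, prints the nr_hugepages / resv_hugepages / surplus_hugepages / anon_hugepages summary visible in the output, and then re-reads HugePages_Total to confirm the kernel accounts for exactly the requested pages. A condensed sketch of that check, assuming the get_meminfo sketch above (this simplified function is illustrative, not the repository's full verify_nr_hugepages):

    verify_nr_hugepages() {
        local nr_hugepages=${1:-1024}              # this run expects 1024 pages
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)         # hugepages.sh@99 in the trace
        resv=$(get_meminfo HugePages_Rsvd)         # hugepages.sh@100
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$(get_meminfo AnonHugePages)"
        total=$(get_meminfo HugePages_Total)       # hugepages.sh@110
        # Pass only if the kernel's total equals requested + surplus + reserved pages.
        (( total == nr_hugepages + surp + resv ))
    }

In this run the check holds: /proc/meminfo reports HugePages_Total: 1024 with zero surplus and zero reserved pages, so 1024 == 1024 + 0 + 0.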
00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 
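Further down in this section the same machinery runs per NUMA node: get_nodes (hugepages.sh@27 through @33) globs /sys/devices/system/node/node*, records 1024 pages for the single node of this guest, and get_meminfo is re-invoked with node=0 so it reads /sys/devices/system/node/node0/meminfo before the closing "node0=1024 expecting 1024" comparison. A rough sketch of that per-node pass, with extglob enabled as above (variable names follow the trace, but these two helpers are simplified stand-ins for the real hugepages.sh functions):

    declare -A nodes_sys

    get_nodes() {
        local node
        # One entry per node directory; on this single-node guest only node0 exists.
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
    }

    check_per_node_hugepages() {
        local node expected=${1:-1024}
        get_nodes
        for node in "${!nodes_sys[@]}"; do
            echo "node${node}=${nodes_sys[$node]} expecting ${expected}"
            [[ ${nodes_sys[$node]} == "$expected" ]] || return 1
        done
    }

The per_node_1G_alloc test that starts near the end of this section drives the same path with NRHUGE=512 and HUGENODE=0: get_test_nr_hugepages is asked for 1048576 kB on node 0, which at the 2048 kB Hugepagesize reported here works out to 512 pages (512 x 2048 kB = 1 GiB) placed on node 0 only, before scripts/setup.sh prints the PCI device-binding status shown below.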
00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.350 
23:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.350 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.350 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.351 23:59:08 -- setup/common.sh@33 -- # echo 1024 00:04:54.351 23:59:08 -- setup/common.sh@33 -- # return 0 00:04:54.351 23:59:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.351 23:59:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.351 23:59:08 -- setup/hugepages.sh@27 -- # local node 00:04:54.351 23:59:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.351 23:59:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.351 23:59:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.351 23:59:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.351 23:59:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.351 23:59:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.351 23:59:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.351 23:59:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.351 23:59:08 -- setup/common.sh@18 -- # local node=0 00:04:54.351 23:59:08 -- setup/common.sh@19 -- # local var val 00:04:54.351 23:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.351 23:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.351 23:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.351 23:59:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.351 23:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.351 23:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6477484 kB' 'MemUsed: 5759608 kB' 'SwapCached: 0 kB' 'Active: 466696 kB' 'Inactive: 2825260 kB' 'Active(anon): 127476 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 3174976 kB' 'Mapped: 50716 kB' 'AnonPages: 118828 kB' 'Shmem: 10496 kB' 'KernelStack: 6884 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189800 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # 
continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.351 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.351 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # continue 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.352 23:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.352 23:59:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.352 23:59:08 -- setup/common.sh@33 -- # echo 0 00:04:54.352 23:59:08 -- setup/common.sh@33 -- # return 0 00:04:54.352 node0=1024 expecting 1024 00:04:54.352 ************************************ 00:04:54.352 END TEST default_setup 00:04:54.352 ************************************ 00:04:54.352 23:59:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.352 23:59:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.352 23:59:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.352 23:59:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.352 23:59:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.352 23:59:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.352 00:04:54.352 real 0m1.297s 00:04:54.352 user 0m0.492s 00:04:54.352 sys 0m0.607s 00:04:54.352 23:59:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.352 23:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.613 23:59:08 -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:54.613 23:59:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.613 23:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.613 23:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:54.613 ************************************ 00:04:54.613 START TEST per_node_1G_alloc 00:04:54.613 ************************************ 00:04:54.613 23:59:08 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:54.613 23:59:08 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:54.613 23:59:08 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:54.613 23:59:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.613 23:59:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:54.613 23:59:08 -- setup/hugepages.sh@51 -- # shift 00:04:54.613 23:59:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:54.613 23:59:08 -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.613 23:59:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.613 23:59:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:54.613 23:59:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:54.613 23:59:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:54.613 23:59:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.613 23:59:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.613 23:59:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.613 23:59:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.613 23:59:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.613 23:59:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:54.613 23:59:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.613 23:59:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:54.613 23:59:08 -- setup/hugepages.sh@73 -- # return 0 00:04:54.613 23:59:08 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:54.613 23:59:08 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:54.613 23:59:08 -- setup/hugepages.sh@146 -- # setup output 00:04:54.613 23:59:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.613 23:59:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.874 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.874 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.874 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.874 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.874 23:59:09 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:54.874 23:59:09 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:54.874 23:59:09 -- setup/hugepages.sh@89 -- # local node 00:04:54.874 23:59:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.874 23:59:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.874 23:59:09 -- setup/hugepages.sh@92 -- # local surp 00:04:54.874 23:59:09 -- setup/hugepages.sh@93 -- # local resv 00:04:54.874 23:59:09 -- setup/hugepages.sh@94 -- # local anon 00:04:54.874 23:59:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.139 23:59:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.139 23:59:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.139 23:59:09 -- 
setup/common.sh@18 -- # local node= 00:04:55.139 23:59:09 -- setup/common.sh@19 -- # local var val 00:04:55.139 23:59:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.139 23:59:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.139 23:59:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.139 23:59:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.139 23:59:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.139 23:59:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7529064 kB' 'MemAvailable: 10498516 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467672 kB' 'Inactive: 2825264 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119268 kB' 'Mapped: 50908 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189820 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105580 kB' 'KernelStack: 6828 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.139 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.139 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 
-- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.140 23:59:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.140 23:59:09 -- setup/common.sh@33 -- # echo 0 00:04:55.140 23:59:09 -- setup/common.sh@33 -- # return 0 00:04:55.140 23:59:09 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.140 23:59:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.140 23:59:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.140 23:59:09 -- setup/common.sh@18 -- # local node= 00:04:55.140 23:59:09 -- setup/common.sh@19 -- # local var val 00:04:55.140 23:59:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.140 23:59:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.140 23:59:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.140 23:59:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.140 23:59:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.140 23:59:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.140 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.140 23:59:09 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7528812 kB' 'MemAvailable: 10498264 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466964 kB' 'Inactive: 2825264 kB' 'Active(anon): 127744 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189836 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105596 kB' 'KernelStack: 6832 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # 
continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.141 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.141 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.142 23:59:09 -- setup/common.sh@33 -- # echo 0 00:04:55.142 23:59:09 -- setup/common.sh@33 -- # return 0 00:04:55.142 23:59:09 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.142 23:59:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.142 23:59:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.142 23:59:09 -- setup/common.sh@18 -- # local node= 00:04:55.142 23:59:09 -- setup/common.sh@19 -- # local var val 00:04:55.142 23:59:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.142 23:59:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.142 23:59:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.142 23:59:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.142 23:59:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.142 23:59:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7528812 kB' 'MemAvailable: 10498264 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467148 kB' 'Inactive: 2825264 kB' 'Active(anon): 127928 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189824 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105584 kB' 'KernelStack: 6816 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.142 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.142 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.143 23:59:09 -- setup/common.sh@33 -- # echo 0 00:04:55.143 23:59:09 -- setup/common.sh@33 -- # return 0 00:04:55.143 nr_hugepages=512 00:04:55.143 resv_hugepages=0 00:04:55.143 23:59:09 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.143 23:59:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:55.143 23:59:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.143 surplus_hugepages=0 00:04:55.143 23:59:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.143 anon_hugepages=0 00:04:55.143 23:59:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.143 23:59:09 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.143 23:59:09 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:55.143 23:59:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.143 23:59:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.143 23:59:09 -- setup/common.sh@18 -- # local node= 00:04:55.143 23:59:09 -- setup/common.sh@19 -- # local var val 00:04:55.143 23:59:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.143 23:59:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.143 23:59:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.143 23:59:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.143 23:59:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.143 23:59:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7528812 kB' 'MemAvailable: 10498264 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466896 kB' 'Inactive: 2825264 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118748 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189820 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105580 kB' 'KernelStack: 6816 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.143 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.143 23:59:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 
00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.144 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.144 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.144 23:59:09 -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.145 23:59:09 -- setup/common.sh@33 -- # echo 512 00:04:55.145 23:59:09 -- setup/common.sh@33 -- # return 0 00:04:55.145 23:59:09 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:55.145 23:59:09 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.145 23:59:09 -- setup/hugepages.sh@27 -- # local node 00:04:55.145 23:59:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.145 23:59:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.145 23:59:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.145 23:59:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.145 23:59:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.145 23:59:09 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:55.145 23:59:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.145 23:59:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.145 23:59:09 -- setup/common.sh@18 -- # local node=0 00:04:55.145 23:59:09 -- setup/common.sh@19 -- # local var val 00:04:55.145 23:59:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.145 23:59:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.145 23:59:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.145 23:59:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.145 23:59:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.145 23:59:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7528812 kB' 'MemUsed: 4708280 kB' 'SwapCached: 0 kB' 'Active: 466896 kB' 'Inactive: 2825264 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 50664 kB' 'AnonPages: 118752 kB' 'Shmem: 10496 kB' 'KernelStack: 6816 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189820 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.145 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.145 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 
00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # continue 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.146 23:59:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.146 23:59:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.146 23:59:09 -- setup/common.sh@33 -- # echo 0 00:04:55.146 23:59:09 -- setup/common.sh@33 -- # return 0 00:04:55.146 node0=512 expecting 512 00:04:55.146 ************************************ 00:04:55.146 END TEST per_node_1G_alloc 00:04:55.146 ************************************ 00:04:55.146 23:59:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.146 23:59:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.146 23:59:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.146 23:59:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.146 23:59:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:55.146 00:04:55.146 real 0m0.615s 00:04:55.146 user 0m0.258s 00:04:55.146 sys 0m0.356s 00:04:55.146 23:59:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.146 23:59:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.146 23:59:09 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:55.146 23:59:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.146 23:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.146 23:59:09 -- common/autotest_common.sh@10 -- # set +x 00:04:55.146 ************************************ 00:04:55.146 START TEST even_2G_alloc 00:04:55.146 ************************************ 00:04:55.146 23:59:09 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:55.146 23:59:09 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:55.146 23:59:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.146 23:59:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.146 23:59:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.146 23:59:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.146 23:59:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.146 23:59:09 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:04:55.146 23:59:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.146 23:59:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.146 23:59:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.146 23:59:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:55.146 23:59:09 -- setup/hugepages.sh@83 -- # : 0 00:04:55.146 23:59:09 -- setup/hugepages.sh@84 -- # : 0 00:04:55.146 23:59:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.146 23:59:09 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:55.146 23:59:09 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:55.146 23:59:09 -- setup/hugepages.sh@153 -- # setup output 00:04:55.146 23:59:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.146 23:59:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.722 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.722 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.722 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.722 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.722 23:59:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:55.722 23:59:10 -- setup/hugepages.sh@89 -- # local node 00:04:55.722 23:59:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.722 23:59:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.722 23:59:10 -- setup/hugepages.sh@92 -- # local surp 00:04:55.722 23:59:10 -- setup/hugepages.sh@93 -- # local resv 00:04:55.722 23:59:10 -- setup/hugepages.sh@94 -- # local anon 00:04:55.722 23:59:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.722 23:59:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.722 23:59:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.722 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:55.722 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:55.722 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.722 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.722 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.722 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.722 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.722 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.722 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6486380 kB' 'MemAvailable: 9455832 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467492 kB' 'Inactive: 2825264 kB' 'Active(anon): 128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50892 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189860 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105620 kB' 
'KernelStack: 7004 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55944 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
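A quick arithmetic note on the even_2G_alloc setup traced just above: the requested size of 2097152 appears to be in kB (2 GiB), so dividing by the 2048 kB huge page size yields the nr_hugepages=1024 the trace shows, and with a single memory node (_no_nodes=1) all 1024 pages are assigned to node 0 before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A sketch of that sizing step, with illustrative names; the real logic is in setup/hugepages.sh:

size_kb=2097152                                # even_2G_alloc target, 2 GiB
hugepagesize_kb=2048                           # Hugepagesize from the meminfo dump
nr_hugepages=$(( size_kb / hugepagesize_kb ))  # -> 1024
no_nodes=1
nodes_test=()
nodes_test[no_nodes - 1]=$nr_hugepages         # single node: node 0 takes all 1024
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"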
00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.724 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:55.724 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:55.724 23:59:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.724 23:59:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.724 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.724 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:55.724 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:55.724 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.724 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.724 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.724 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.724 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.724 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6486380 kB' 'MemAvailable: 9455832 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467244 kB' 'Inactive: 2825264 kB' 'Active(anon): 128024 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 50944 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189844 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105604 kB' 'KernelStack: 6940 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 337192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55928 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 
'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- 
setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 
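The meminfo snapshot being scanned here already contains the numbers the test cares about: HugePages_Total: 1024 and HugePages_Free: 1024 with Hugepagesize: 2048 kB, so Hugetlb = 1024 × 2048 kB = 2097152 kB, exactly the 2 GiB that even_2G_alloc requested, while HugePages_Surp and HugePages_Rsvd are both 0, so the surplus and reserved counters that verify_nr_hugepages is extracting in this pass will not change the final comparison.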
00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:55.725 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:55.725 23:59:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.725 23:59:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.725 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.725 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:55.725 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:55.725 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.725 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.725 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.725 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.725 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.725 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6486380 kB' 'MemAvailable: 9455832 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467128 kB' 'Inactive: 2825264 kB' 'Active(anon): 127908 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119012 kB' 'Mapped: 50936 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189860 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105620 kB' 'KernelStack: 6864 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 334752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55928 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 
00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:55.726 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:55.726 23:59:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.726 nr_hugepages=1024 00:04:55.726 23:59:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.726 resv_hugepages=0 00:04:55.727 23:59:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.727 surplus_hugepages=0 00:04:55.727 23:59:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.727 anon_hugepages=0 00:04:55.727 23:59:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.727 23:59:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.727 23:59:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.727 23:59:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.727 23:59:10 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:55.727 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:55.727 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:55.727 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.727 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.727 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.727 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.727 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.727 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6486128 kB' 'MemAvailable: 9455580 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466952 kB' 'Inactive: 2825264 kB' 'Active(anon): 127732 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118836 kB' 'Mapped: 50864 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189852 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105612 kB' 'KernelStack: 6816 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- 
setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 
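Annotation: the long runs of [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue above are setup/common.sh's get_meminfo under bash -x, walking every /proc/meminfo (or per-node meminfo) key until it reaches the one requested — HugePages_Total in this pass — then echoing its value and returning. A minimal equivalent lookup is sketched below; the function name is illustrative, and the harness itself does this with the mapfile/read loop being traced rather than sed/awk.
    # Sketch only: print the numeric value of one meminfo field.
    # Optional second argument selects /sys/devices/system/node/node<N>/meminfo,
    # mirroring the node= handling visible in the trace.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node N " prefix; strip it, then print the
        # value of the first matching key ("kB" suffix dropped by the +0 coercion).
        sed 's/^Node [0-9]* //' "$file" | awk -v k="$key" -F': *' '$1 == k {print $2+0; exit}'
    }
    # e.g. get_meminfo_sketch HugePages_Total  -> 1024 on this run
    #      get_meminfo_sketch HugePages_Surp 0 -> 0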
00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.727 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.727 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 
23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 23:59:10 -- setup/common.sh@33 -- # echo 1024 00:04:55.728 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:55.728 23:59:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.728 23:59:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.728 23:59:10 -- setup/hugepages.sh@27 -- # local node 00:04:55.728 23:59:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.728 23:59:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.728 23:59:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.728 23:59:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.728 23:59:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.728 23:59:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.728 23:59:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.728 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.728 23:59:10 -- setup/common.sh@18 -- # local node=0 00:04:55.728 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:55.728 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.728 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.728 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.728 23:59:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.728 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.728 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6486128 kB' 'MemUsed: 5750964 kB' 'SwapCached: 0 kB' 'Active: 466824 kB' 'Inactive: 2825264 kB' 'Active(anon): 127604 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 
'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 50864 kB' 'AnonPages: 118964 kB' 'Shmem: 10496 kB' 'KernelStack: 6800 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189844 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.728 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 
23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- 
# continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@32 -- # continue 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 23:59:10 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.729 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:55.729 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:55.729 23:59:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.729 node0=1024 expecting 1024 00:04:55.729 23:59:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.729 23:59:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.729 23:59:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.729 23:59:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.729 00:04:55.729 real 0m0.578s 00:04:55.729 user 0m0.227s 00:04:55.729 sys 0m0.373s 00:04:55.729 23:59:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.729 23:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.729 ************************************ 00:04:55.729 END TEST even_2G_alloc 00:04:55.729 ************************************ 00:04:55.729 23:59:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:55.729 23:59:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.729 23:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.729 23:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.729 ************************************ 00:04:55.729 START TEST odd_alloc 00:04:55.729 ************************************ 00:04:55.729 23:59:10 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:55.729 23:59:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:55.729 23:59:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:55.729 23:59:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:55.729 23:59:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.729 23:59:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.729 23:59:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.729 23:59:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:55.729 23:59:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.729 23:59:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.729 23:59:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.729 23:59:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:55.729 23:59:10 -- setup/hugepages.sh@83 -- # : 0 00:04:55.729 23:59:10 -- setup/hugepages.sh@84 -- # : 0 00:04:55.729 23:59:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.729 23:59:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:55.730 23:59:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:55.730 23:59:10 -- setup/hugepages.sh@160 -- # setup output 00:04:55.730 23:59:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.730 23:59:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.307 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.307 0000:00:08.0 (1b36 
0010): Already using the uio_pci_generic driver 00:04:56.307 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.307 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.307 23:59:10 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:56.307 23:59:10 -- setup/hugepages.sh@89 -- # local node 00:04:56.307 23:59:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.308 23:59:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.308 23:59:10 -- setup/hugepages.sh@92 -- # local surp 00:04:56.308 23:59:10 -- setup/hugepages.sh@93 -- # local resv 00:04:56.308 23:59:10 -- setup/hugepages.sh@94 -- # local anon 00:04:56.308 23:59:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.308 23:59:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.308 23:59:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.308 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:56.308 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:56.308 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.308 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.308 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.308 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.308 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.308 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6487316 kB' 'MemAvailable: 9456768 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467356 kB' 'Inactive: 2825264 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119272 kB' 'Mapped: 50720 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189832 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105592 kB' 'KernelStack: 6860 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 
23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.308 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.308 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.309 23:59:10 -- 
setup/common.sh@33 -- # echo 0 00:04:56.309 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:56.309 23:59:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.309 23:59:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.309 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.309 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:56.309 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:56.309 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.309 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.309 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.309 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.309 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.309 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6487064 kB' 'MemAvailable: 9456516 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466960 kB' 'Inactive: 2825264 kB' 'Active(anon): 127740 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119084 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189836 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105596 kB' 'KernelStack: 6824 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- 
setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.309 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.309 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 
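Annotation: these passes are setup/hugepages.sh collecting AnonHugePages, HugePages_Surp and HugePages_Rsvd for the odd_alloc case it just set up (a 2098176 kB request with HUGEMEM=2049, i.e. nr_hugepages=1025 two-megabyte pages per the trace), before repeating the setup/hugepages.sh@110-style comparison already seen above for even_2G_alloc, where node0=1024 was reported as expected. A self-contained sketch of that check follows; the function name is illustrative and the harness gathers the counters through its get_meminfo loop rather than a single awk pass.
    # Sketch only: does the configured hugepage pool match what the test asked for?
    verify_hugepages_sketch() {
        local expected=$1
        local total surp resv
        # Pull the three counters straight out of /proc/meminfo in one pass.
        read -r total surp resv < <(awk -F': *' '
            $1 == "HugePages_Total" {t=$2}
            $1 == "HugePages_Surp"  {s=$2}
            $1 == "HugePages_Rsvd"  {r=$2}
            END {print t+0, s+0, r+0}' /proc/meminfo)
        # The check the trace performs: the pool must equal the requested page
        # count plus any surplus and reserved pages.
        (( total == expected + surp + resv ))
    }
    # e.g. verify_hugepages_sketch 1025 && echo OK   (odd_alloc expects 1025 pages)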
00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.310 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:56.310 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:56.310 23:59:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.310 23:59:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.310 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.310 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:56.310 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:56.310 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.310 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.310 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.310 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.310 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.310 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6487064 kB' 'MemAvailable: 9456516 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466724 kB' 'Inactive: 2825264 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118632 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189808 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105568 kB' 'KernelStack: 6816 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.310 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.310 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # 
continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.311 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.311 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.312 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:56.312 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:56.312 23:59:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.312 nr_hugepages=1025 00:04:56.312 23:59:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:56.312 resv_hugepages=0 00:04:56.312 23:59:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.312 surplus_hugepages=0 00:04:56.312 23:59:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.312 anon_hugepages=0 00:04:56.312 23:59:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.312 23:59:10 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.312 23:59:10 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:56.312 23:59:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.312 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.312 23:59:10 -- setup/common.sh@18 -- # local node= 00:04:56.312 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:56.312 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.312 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.312 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.312 23:59:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.312 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.312 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6487064 kB' 'MemAvailable: 9456516 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466672 kB' 'Inactive: 2825264 kB' 'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118536 kB' 'Mapped: 50664 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189808 kB' 'SReclaimable: 84240 kB' 
'SUnreclaim: 105568 kB' 'KernelStack: 6800 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.312 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.312 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.313 23:59:10 -- setup/common.sh@33 -- # echo 1025 00:04:56.313 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:56.313 23:59:10 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.313 23:59:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.313 23:59:10 -- setup/hugepages.sh@27 -- # local node 00:04:56.313 23:59:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.313 23:59:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:56.313 23:59:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.313 23:59:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.313 23:59:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.313 23:59:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.313 23:59:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.313 23:59:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.313 23:59:10 -- setup/common.sh@18 -- # local node=0 00:04:56.313 23:59:10 -- setup/common.sh@19 -- # local var val 00:04:56.313 23:59:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.313 23:59:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.313 23:59:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.313 23:59:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.313 23:59:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.313 23:59:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6487064 kB' 'MemUsed: 5750028 kB' 'SwapCached: 0 kB' 'Active: 466736 kB' 'Inactive: 2825264 kB' 'Active(anon): 127516 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 50664 kB' 'AnonPages: 118852 kB' 'Shmem: 10496 kB' 'KernelStack: 6848 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189808 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.313 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.313 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.313 23:59:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # 
continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # continue 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.314 23:59:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.314 23:59:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.314 23:59:10 -- setup/common.sh@33 -- # echo 0 00:04:56.314 23:59:10 -- setup/common.sh@33 -- # return 0 00:04:56.314 23:59:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.314 23:59:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.314 23:59:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.314 23:59:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.314 node0=1025 expecting 1025 00:04:56.314 23:59:10 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:56.314 23:59:10 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:56.314 00:04:56.314 real 0m0.571s 00:04:56.314 user 0m0.246s 00:04:56.314 sys 0m0.348s 00:04:56.314 23:59:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.314 23:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.314 ************************************ 00:04:56.314 END TEST odd_alloc 00:04:56.314 ************************************ 00:04:56.630 23:59:10 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:56.630 23:59:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.630 23:59:10 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.630 23:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 ************************************ 00:04:56.630 START TEST custom_alloc 00:04:56.630 ************************************ 00:04:56.630 23:59:10 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:56.630 23:59:10 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:56.630 23:59:10 -- setup/hugepages.sh@169 -- # local node 00:04:56.630 23:59:10 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:56.630 23:59:10 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:56.630 23:59:10 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:56.630 23:59:10 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:56.630 23:59:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.630 23:59:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.630 23:59:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.630 23:59:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.630 23:59:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.630 23:59:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.630 23:59:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.630 23:59:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.630 23:59:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.630 23:59:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:56.630 23:59:10 -- setup/hugepages.sh@83 -- # : 0 00:04:56.630 23:59:10 -- setup/hugepages.sh@84 -- # : 0 00:04:56.630 23:59:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:56.630 23:59:10 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.630 23:59:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.630 23:59:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.630 23:59:10 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:56.630 23:59:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.630 23:59:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.630 23:59:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.630 23:59:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.630 23:59:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.630 23:59:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.630 23:59:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.631 23:59:10 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:56.631 23:59:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.631 23:59:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.631 23:59:10 -- setup/hugepages.sh@78 -- # return 0 00:04:56.631 23:59:10 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:56.631 23:59:10 -- setup/hugepages.sh@187 -- # setup output 00:04:56.631 23:59:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.631 23:59:10 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.893 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.893 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.893 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.893 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.893 23:59:11 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:56.893 23:59:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:56.893 23:59:11 -- setup/hugepages.sh@89 -- # local node 00:04:56.893 23:59:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.893 23:59:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.893 23:59:11 -- setup/hugepages.sh@92 -- # local surp 00:04:56.893 23:59:11 -- setup/hugepages.sh@93 -- # local resv 00:04:56.893 23:59:11 -- setup/hugepages.sh@94 -- # local anon 00:04:56.893 23:59:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.893 23:59:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.893 23:59:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.893 23:59:11 -- setup/common.sh@18 -- # local node= 00:04:56.893 23:59:11 -- setup/common.sh@19 -- # local var val 00:04:56.893 23:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.893 23:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.893 23:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.893 23:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.893 23:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.893 23:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7535352 kB' 'MemAvailable: 10504804 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467108 kB' 'Inactive: 2825264 kB' 'Active(anon): 127888 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119188 kB' 'Mapped: 50684 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189912 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105672 kB' 'KernelStack: 6832 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 
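(Note: the wall of xtrace above and below is setup/common.sh's get_meminfo helper scanning a meminfo file field by field, skipping every key that is not the one requested and echoing the value once it matches; per-node queries read /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo. A minimal bash sketch of that pattern follows. It is illustrative only: the function name, the "Node N " prefix handling and the fallback value are assumptions, not the exact SPDK implementation.)

#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo pattern visible in the trace; not the SPDK helper itself.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                        # e.g. 1025 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done <"$mem_f"
    echo 0                                     # assumed fallback when the key is absent
}
# Example calls matching the checks traced in this test:
#   get_meminfo_sketch HugePages_Surp 0    -> 0
#   get_meminfo_sketch HugePages_Total     -> 1025 (odd_alloc) or 512 (custom_alloc)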
00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': 
' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 
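(Note: aside from the meminfo scans, the arithmetic being verified here is small: odd_alloc requested 1025 pages and read back HugePages_Total 1025 with 0 surplus and 0 reserved, and custom_alloc turns its 1048576 kB request into a page count using the 2048 kB Hugepagesize reported in the dumps, which is where the 512 in HUGENODE='nodes_hp[0]=512' comes from. The lines below are an illustrative recomputation of those numbers, not part of the test scripts.)

hugepagesize_kb=2048                                # "Hugepagesize: 2048 kB" in the meminfo dumps
requested_kb=1048576                                # get_test_nr_hugepages 1048576
nr_hugepages=$(( requested_kb / hugepagesize_kb ))  # = 512, matching nodes_hp[0]=512
odd_nr=1025 odd_surp=0 odd_resv=0                   # values read back by get_meminfo in odd_alloc
(( 1025 == odd_nr + odd_surp + odd_resv )) && echo "node0=1025 expecting 1025"
echo "custom_alloc requests nr_hugepages=$nr_hugepages"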
00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.893 23:59:11 -- setup/common.sh@33 -- # echo 0 00:04:56.893 23:59:11 -- setup/common.sh@33 -- # return 0 00:04:56.893 23:59:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.893 23:59:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.893 23:59:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.893 23:59:11 -- setup/common.sh@18 -- # local node= 00:04:56.893 23:59:11 -- setup/common.sh@19 -- # local var val 00:04:56.893 23:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.893 23:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.893 23:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.893 23:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.893 23:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.893 23:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.893 23:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7535352 kB' 'MemAvailable: 10504804 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466792 kB' 'Inactive: 2825264 kB' 'Active(anon): 127572 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189912 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105672 kB' 'KernelStack: 6800 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.893 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.893 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 
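These records repeat the same key-by-key walk for HugePages_Surp; the value is stored as surp and, together with the anon and resv values gathered in the neighbouring passes, feeds the pool-consistency check logged further down. A short worked sketch of that check, with the variable names and values copied from this log rather than from the script source:

    # Values as reported later in this run (taken from the log, not queried live).
    nr_hugepages=512; surp=0; resv=0
    if (( 512 == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: 512 == ${nr_hugepages} + ${surp} + ${resv}"
    fi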
00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.894 23:59:11 -- setup/common.sh@33 -- # echo 0 00:04:56.894 23:59:11 -- setup/common.sh@33 -- # return 0 00:04:56.894 23:59:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.894 23:59:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.894 23:59:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.894 23:59:11 -- setup/common.sh@18 -- # local node= 00:04:56.894 23:59:11 -- setup/common.sh@19 -- # local var val 00:04:56.894 23:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.894 23:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.894 23:59:11 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:56.894 23:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.894 23:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.894 23:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7535352 kB' 'MemAvailable: 10504804 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466908 kB' 'Inactive: 2825264 kB' 'Active(anon): 127688 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118992 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189912 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105672 kB' 'KernelStack: 6784 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.894 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.894 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- 
setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.895 23:59:11 -- setup/common.sh@33 -- # echo 0 00:04:56.895 23:59:11 -- setup/common.sh@33 -- # return 0 00:04:56.895 23:59:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.895 23:59:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:56.895 nr_hugepages=512 00:04:56.895 resv_hugepages=0 00:04:56.895 23:59:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.895 surplus_hugepages=0 00:04:56.895 23:59:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.895 anon_hugepages=0 00:04:56.895 23:59:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.895 23:59:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:56.895 23:59:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:56.895 23:59:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.895 23:59:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.895 23:59:11 -- setup/common.sh@18 -- # local node= 00:04:56.895 23:59:11 -- setup/common.sh@19 -- # local var val 00:04:56.895 23:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.895 23:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.895 23:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.895 23:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.895 23:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.895 23:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7535352 kB' 'MemAvailable: 10504804 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466712 kB' 'Inactive: 2825264 kB' 'Active(anon): 127492 kB' 
'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118792 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189912 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105672 kB' 'KernelStack: 6768 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 335120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- 
setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 
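The /proc/meminfo snapshot printed above reports HugePages_Total: 512 with Hugepagesize: 2048 kB; the Hugetlb figure in the same snapshot is simply their product, which this one-liner reproduces (values copied from the log):

    pages=512; page_kb=2048
    echo "Hugetlb: $(( pages * page_kb )) kB"   # 1048576 kB, matching the snapshot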
00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.895 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.895 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- 
setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.896 23:59:11 -- setup/common.sh@33 -- # echo 512 00:04:56.896 23:59:11 -- setup/common.sh@33 -- # return 0 00:04:56.896 23:59:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:56.896 23:59:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.896 23:59:11 -- setup/hugepages.sh@27 -- # local node 00:04:56.896 23:59:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.896 23:59:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.896 23:59:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.896 23:59:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.896 23:59:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.896 23:59:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.896 23:59:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.896 23:59:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.896 23:59:11 -- setup/common.sh@18 -- # local node=0 00:04:56.896 23:59:11 -- setup/common.sh@19 -- # local var val 00:04:56.896 23:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.896 23:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.896 23:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.896 23:59:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.896 23:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.896 23:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7534848 kB' 'MemUsed: 4702244 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 2825264 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 50668 kB' 'AnonPages: 118760 kB' 'Shmem: 10496 kB' 'KernelStack: 6820 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189912 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 
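After the totals check, get_nodes enumerates /sys/devices/system/node/node*, records 512 pages for the single node, and re-runs get_meminfo against node0's meminfo file (whose lines carry a "Node N" prefix that the script strips). A hedged sketch of that per-node readout; the awk filter is illustrative, the suite itself uses its read loop:

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # HugePages_* lines in per-node meminfo have no " kB" suffix; take the last field.
        free=$(awk '/HugePages_Free:/ { print $NF }' "$node_dir/meminfo")
        echo "node${node} HugePages_Free=${free}"   # this run: node0 HugePages_Free=512
    done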
00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.896 23:59:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.896 23:59:11 -- setup/common.sh@32 -- # continue 00:04:56.896 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # continue 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.156 23:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.156 23:59:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.156 23:59:11 -- setup/common.sh@33 -- # echo 0 00:04:57.156 23:59:11 -- setup/common.sh@33 -- # return 0 00:04:57.156 23:59:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.156 23:59:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.156 23:59:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.156 23:59:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.156 node0=512 expecting 512 00:04:57.156 23:59:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.156 23:59:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:57.156 00:04:57.156 real 0m0.569s 00:04:57.156 user 0m0.256s 00:04:57.156 sys 0m0.338s 00:04:57.156 23:59:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.157 23:59:11 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.157 ************************************ 00:04:57.157 END TEST custom_alloc 00:04:57.157 ************************************ 00:04:57.157 23:59:11 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:57.157 23:59:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.157 23:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.157 23:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 ************************************ 00:04:57.157 START TEST no_shrink_alloc 00:04:57.157 ************************************ 00:04:57.157 23:59:11 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:57.157 23:59:11 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:57.157 23:59:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:57.157 23:59:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:57.157 23:59:11 -- setup/hugepages.sh@51 -- # shift 00:04:57.157 23:59:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:57.157 23:59:11 -- setup/hugepages.sh@52 -- # local node_ids 00:04:57.157 23:59:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.157 23:59:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:57.157 23:59:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:57.157 23:59:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:57.157 23:59:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.157 23:59:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:57.157 23:59:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:57.157 23:59:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.157 23:59:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.157 23:59:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:57.157 23:59:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.157 23:59:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:57.157 23:59:11 -- setup/hugepages.sh@73 -- # return 0 00:04:57.157 23:59:11 -- setup/hugepages.sh@198 -- # setup output 00:04:57.157 23:59:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.157 23:59:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.417 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.417 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.417 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.417 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.417 23:59:12 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:57.417 23:59:12 -- setup/hugepages.sh@89 -- # local node 00:04:57.418 23:59:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.418 23:59:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.418 23:59:12 -- setup/hugepages.sh@92 -- # local surp 00:04:57.418 23:59:12 -- setup/hugepages.sh@93 -- # local resv 00:04:57.418 23:59:12 -- setup/hugepages.sh@94 -- # local anon 00:04:57.418 23:59:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.418 23:59:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.418 23:59:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.418 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:57.681 23:59:12 -- 
setup/common.sh@19 -- # local var val 00:04:57.681 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.681 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.681 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.681 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.681 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.681 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491260 kB' 'MemAvailable: 9460712 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467008 kB' 'Inactive: 2825264 kB' 'Active(anon): 127788 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118828 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189908 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105668 kB' 'KernelStack: 6832 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 
-- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 
23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.681 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.681 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.682 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:57.682 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:57.682 23:59:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.682 23:59:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.682 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.682 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:57.682 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:57.682 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.682 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.682 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.682 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.682 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.682 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491260 kB' 'MemAvailable: 9460712 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 467008 kB' 'Inactive: 2825264 kB' 'Active(anon): 127788 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118848 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189908 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105668 kB' 'KernelStack: 6832 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 
00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 
-- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.682 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.682 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.683 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:57.683 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:57.683 23:59:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.683 23:59:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.683 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.683 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:57.683 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:57.683 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.683 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.683 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.683 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.683 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.683 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491260 kB' 'MemAvailable: 9460712 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466752 kB' 'Inactive: 2825264 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189908 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105668 kB' 'KernelStack: 6816 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 
00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 
-- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 
23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.683 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.683 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.683 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:57.683 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:57.683 23:59:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:57.683 nr_hugepages=1024 00:04:57.683 23:59:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.683 resv_hugepages=0 00:04:57.683 23:59:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.683 surplus_hugepages=0 00:04:57.683 23:59:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.683 anon_hugepages=0 00:04:57.683 23:59:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.683 23:59:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.683 23:59:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.683 23:59:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.683 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.683 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:57.683 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:57.683 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.683 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.683 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.683 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.683 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.683 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.684 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491260 kB' 'MemAvailable: 9460712 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466952 kB' 'Inactive: 2825264 kB' 'Active(anon): 127732 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50668 kB' 'Shmem: 10496 kB' 'KReclaimable: 84240 kB' 'Slab: 189908 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105668 kB' 'KernelStack: 6800 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 335320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 
00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 
23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.684 23:59:12 -- setup/common.sh@33 -- # echo 1024 00:04:57.684 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:57.684 23:59:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.684 23:59:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.684 23:59:12 -- setup/hugepages.sh@27 -- # local node 00:04:57.684 23:59:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.684 23:59:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.684 23:59:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.684 23:59:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.684 23:59:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.684 23:59:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.684 23:59:12 -- 
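The arithmetic that follows the HugePages_Total read is the actual pass/fail condition: the kernel-reported total must equal the requested nr_hugepages plus surplus and reserved pages, and get_nodes then records what each NUMA node is expected to hold. A hedged, self-contained sketch of that accounting (standard /proc and sysfs counters; the variable names mirror the trace, but the code is not the literal hugepages.sh):

nr_hugepages=1024
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

declare -A nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    nodes_sys[$id]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
done
echo "node0=${nodes_sys[0]} expecting $nr_hugepages"          # node0=1024 expecting 1024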
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.684 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.684 23:59:12 -- setup/common.sh@18 -- # local node=0 00:04:57.684 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:57.684 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.684 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.684 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.684 23:59:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.684 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.684 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491268 kB' 'MemUsed: 5745824 kB' 'SwapCached: 0 kB' 'Active: 467132 kB' 'Inactive: 2825264 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 50668 kB' 'AnonPages: 119020 kB' 'Shmem: 10496 kB' 'KernelStack: 6816 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84240 kB' 'Slab: 189904 kB' 'SReclaimable: 84240 kB' 'SUnreclaim: 105664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.684 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.684 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 
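Note the switch at the start of this call: because a node argument (node=0) was passed, mem_f moves from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix (stripped by the mapfile post-processing) and report MemUsed instead of MemAvailable. The file can be inspected directly on such a machine; the values below are the ones echoed in this trace:

head -n 3 /sys/devices/system/node/node0/meminfo
# Node 0 MemTotal:       12237092 kB
# Node 0 MemFree:         6491268 kB
# Node 0 MemUsed:         5745824 kB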
00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 
23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # continue 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.685 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.685 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.685 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:57.685 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:57.685 23:59:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.685 23:59:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.685 node0=1024 expecting 1024 00:04:57.685 23:59:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.685 23:59:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.685 23:59:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.685 23:59:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.685 23:59:12 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:57.685 23:59:12 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:57.685 23:59:12 -- setup/hugepages.sh@202 -- # setup output 00:04:57.685 23:59:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.685 23:59:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.209 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.209 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.209 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.209 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:58.209 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:58.209 23:59:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:58.209 23:59:12 -- setup/hugepages.sh@89 -- # local node 00:04:58.209 23:59:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.209 23:59:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.209 23:59:12 -- setup/hugepages.sh@92 -- # local surp 00:04:58.209 23:59:12 -- setup/hugepages.sh@93 -- # local resv 00:04:58.209 23:59:12 -- setup/hugepages.sh@94 -- # local anon 00:04:58.209 23:59:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.209 23:59:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.209 23:59:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.209 23:59:12 
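At this point the per-node count matched ("node0=1024 expecting 1024") and the test re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no. The PCI lines show the usual behavior on this VM: the virtio-blk disk (1af4 1001) backing the mounted vda partitions is skipped, while the four emulated NVMe controllers (1b36 0010) stay bound to uio_pci_generic. Since CLEAR_HUGE=no leaves the existing reservation in place and 1024 pages are already allocated, asking for 512 is a no-op, which is what the INFO line reports. A rough illustration of that hugepage decision using the standard kernel sysfs knob (not the SPDK script's internals):

NRHUGE=512
sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(cat "$sysfs")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$sysfs"           # raising the reservation would need root
fi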
-- setup/common.sh@18 -- # local node= 00:04:58.209 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:58.209 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.209 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.209 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.209 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.209 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.209 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6492144 kB' 'MemAvailable: 9461596 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466304 kB' 'Inactive: 2825264 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118184 kB' 'Mapped: 50088 kB' 'Shmem: 10496 kB' 'KReclaimable: 84236 kB' 'Slab: 189744 kB' 'SReclaimable: 84236 kB' 'SUnreclaim: 105508 kB' 'KernelStack: 6876 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 324296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.209 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.209 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 
-- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.210 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:58.210 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:58.210 23:59:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.210 23:59:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.210 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.210 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:58.210 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:58.210 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.210 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.210 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.210 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.210 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.210 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- 
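The verify pass that started after setup.sh first checks transparent hugepages: the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test is the xtrace of a comparison against /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only counted when THP is not pinned to "never" (it is 0 kB here either way). A hedged equivalent of that guard:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)         # "always [madvise] never" on this box
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
fi
echo "anon_hugepages=$anon"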
setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491896 kB' 'MemAvailable: 9461348 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 466044 kB' 'Inactive: 2825264 kB' 'Active(anon): 126824 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117908 kB' 'Mapped: 49888 kB' 'Shmem: 10496 kB' 'KReclaimable: 84236 kB' 'Slab: 189740 kB' 'SReclaimable: 84236 kB' 'SUnreclaim: 105504 kB' 'KernelStack: 6816 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 324296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.210 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.210 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # 
continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.211 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.211 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.212 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:58.212 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:58.212 23:59:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.212 23:59:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.212 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.212 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:58.212 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:58.212 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.212 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.212 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.212 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.212 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.212 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6492084 kB' 'MemAvailable: 9461536 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 465704 kB' 'Inactive: 2825264 kB' 'Active(anon): 126484 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117580 kB' 'Mapped: 49816 kB' 'Shmem: 10496 kB' 'KReclaimable: 84236 kB' 'Slab: 189740 kB' 'SReclaimable: 84236 kB' 'SUnreclaim: 105504 kB' 'KernelStack: 6752 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 324296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.212 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.212 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 
23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.213 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:58.213 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:58.213 nr_hugepages=1024 00:04:58.213 resv_hugepages=0 00:04:58.213 surplus_hugepages=0 00:04:58.213 anon_hugepages=0 00:04:58.213 23:59:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.213 23:59:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.213 23:59:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.213 23:59:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.213 23:59:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.213 23:59:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.213 23:59:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.213 23:59:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.213 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.213 23:59:12 -- setup/common.sh@18 -- # local node= 00:04:58.213 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:58.213 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.213 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.213 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.213 23:59:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.213 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.213 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6492084 kB' 'MemAvailable: 9461536 kB' 'Buffers: 3696 kB' 'Cached: 3171284 kB' 'SwapCached: 0 kB' 'Active: 465736 kB' 'Inactive: 2825264 kB' 'Active(anon): 126516 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117596 kB' 'Mapped: 49816 kB' 'Shmem: 10496 kB' 'KReclaimable: 84236 kB' 'Slab: 189740 kB' 'SReclaimable: 84236 kB' 'SUnreclaim: 105504 kB' 'KernelStack: 6752 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 324296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.213 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.213 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 
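The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" records above are setup/common.sh's get_meminfo helper scanning every meminfo key until it reaches the one that was requested, then echoing its value. A minimal bash sketch of that flow, reconstructed from the names visible in the trace (get, node, mem_f, mem, var, val); anything not shown in the log, including the exact control flow of the real helper, is an assumption and simplified here:

shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {    # usage: get_meminfo HugePages_Total   or   get_meminfo HugePages_Surp 0
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _
    # with a node index, prefer the per-node counters exposed under /sys
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; strip it
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}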
00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.214 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.214 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 
-- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.215 23:59:12 -- setup/common.sh@33 -- # echo 1024 00:04:58.215 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:58.215 23:59:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.215 23:59:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.215 23:59:12 -- setup/hugepages.sh@27 -- # local node 00:04:58.215 23:59:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.215 23:59:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.215 23:59:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.215 23:59:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.215 23:59:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.215 23:59:12 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:58.215 23:59:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.215 23:59:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.215 23:59:12 -- setup/common.sh@18 -- # local node=0 00:04:58.215 23:59:12 -- setup/common.sh@19 -- # local var val 00:04:58.215 23:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.215 23:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.215 23:59:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.215 23:59:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.215 23:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.215 23:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6491832 kB' 'MemUsed: 5745260 kB' 'SwapCached: 0 kB' 'Active: 465480 kB' 'Inactive: 2825264 kB' 'Active(anon): 126260 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2825264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3174980 kB' 'Mapped: 49816 kB' 'AnonPages: 117336 kB' 'Shmem: 10496 kB' 'KernelStack: 6752 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84236 kB' 'Slab: 189740 kB' 'SReclaimable: 84236 kB' 'SUnreclaim: 105504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 
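By this point the hugepages check has reduced to plain accounting (hugepages.sh lines 99 through 130 in the trace): the kernel's global HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, and the per-node totals must add up to the same figure. A condensed sketch with the values seen in this run (1024 pages on a single NUMA node, surp=0, resv=0); get_meminfo is the helper traced above, the rest is illustrative:

nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run

# global accounting: allocated pages must match what was requested
(( total == nr_hugepages + surp + resv )) || echo 'global hugepage accounting mismatch'

# per-node accounting: node 0 is the only node on this VM
node0=$(get_meminfo HugePages_Total 0)
echo "node0=$node0 expecting $nr_hugepages"
[[ $node0 == "$nr_hugepages" ]]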
00:04:58.215 23:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.215 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.215 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 
00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # continue 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.216 23:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.216 23:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.216 23:59:12 -- setup/common.sh@33 -- # echo 0 00:04:58.216 23:59:12 -- setup/common.sh@33 -- # return 0 00:04:58.216 node0=1024 expecting 1024 00:04:58.216 ************************************ 00:04:58.216 END TEST no_shrink_alloc 00:04:58.216 ************************************ 00:04:58.216 23:59:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.216 23:59:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.216 23:59:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.216 23:59:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.216 23:59:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.216 23:59:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.216 00:04:58.216 real 0m1.138s 00:04:58.216 user 0m0.528s 00:04:58.216 sys 0m0.645s 00:04:58.216 23:59:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.216 23:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.216 23:59:12 -- setup/hugepages.sh@217 -- # clear_hp 00:04:58.216 23:59:12 -- setup/hugepages.sh@37 -- # local node hp 00:04:58.216 23:59:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.216 23:59:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.216 23:59:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:58.216 23:59:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.216 23:59:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:58.216 23:59:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:58.216 23:59:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:58.216 ************************************ 00:04:58.216 END TEST hugepages 00:04:58.216 ************************************ 00:04:58.216 00:04:58.216 real 0m5.307s 00:04:58.216 user 0m2.174s 00:04:58.216 sys 0m2.931s 00:04:58.216 23:59:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.216 23:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.216 23:59:12 -- setup/test-setup.sh@14 -- # run_test driver 
/home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:58.216 23:59:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.216 23:59:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.216 23:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.216 ************************************ 00:04:58.216 START TEST driver 00:04:58.216 ************************************ 00:04:58.216 23:59:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:58.477 * Looking for test storage... 00:04:58.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.477 23:59:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.477 23:59:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.477 23:59:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.477 23:59:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.477 23:59:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.477 23:59:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.477 23:59:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.477 23:59:12 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.477 23:59:12 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.477 23:59:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.477 23:59:12 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.477 23:59:12 -- scripts/common.sh@337 -- # local 'op=<' 00:04:58.477 23:59:12 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.477 23:59:12 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.477 23:59:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.477 23:59:12 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.477 23:59:12 -- scripts/common.sh@344 -- # : 1 00:04:58.477 23:59:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.477 23:59:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.477 23:59:12 -- scripts/common.sh@364 -- # decimal 1 00:04:58.477 23:59:12 -- scripts/common.sh@352 -- # local d=1 00:04:58.477 23:59:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.477 23:59:12 -- scripts/common.sh@354 -- # echo 1 00:04:58.477 23:59:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.477 23:59:12 -- scripts/common.sh@365 -- # decimal 2 00:04:58.477 23:59:12 -- scripts/common.sh@352 -- # local d=2 00:04:58.477 23:59:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.477 23:59:12 -- scripts/common.sh@354 -- # echo 2 00:04:58.477 23:59:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.477 23:59:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.477 23:59:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.477 23:59:12 -- scripts/common.sh@367 -- # return 0 00:04:58.477 23:59:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.477 23:59:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.477 --rc genhtml_branch_coverage=1 00:04:58.477 --rc genhtml_function_coverage=1 00:04:58.477 --rc genhtml_legend=1 00:04:58.477 --rc geninfo_all_blocks=1 00:04:58.477 --rc geninfo_unexecuted_blocks=1 00:04:58.477 00:04:58.477 ' 00:04:58.477 23:59:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.477 --rc genhtml_branch_coverage=1 00:04:58.477 --rc genhtml_function_coverage=1 00:04:58.477 --rc genhtml_legend=1 00:04:58.477 --rc geninfo_all_blocks=1 00:04:58.477 --rc geninfo_unexecuted_blocks=1 00:04:58.477 00:04:58.477 ' 00:04:58.477 23:59:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.477 --rc genhtml_branch_coverage=1 00:04:58.477 --rc genhtml_function_coverage=1 00:04:58.477 --rc genhtml_legend=1 00:04:58.477 --rc geninfo_all_blocks=1 00:04:58.477 --rc geninfo_unexecuted_blocks=1 00:04:58.477 00:04:58.477 ' 00:04:58.478 23:59:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.478 --rc genhtml_branch_coverage=1 00:04:58.478 --rc genhtml_function_coverage=1 00:04:58.478 --rc genhtml_legend=1 00:04:58.478 --rc geninfo_all_blocks=1 00:04:58.478 --rc geninfo_unexecuted_blocks=1 00:04:58.478 00:04:58.478 ' 00:04:58.478 23:59:12 -- setup/driver.sh@68 -- # setup reset 00:04:58.478 23:59:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.478 23:59:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.062 23:59:18 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:05.062 23:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.062 23:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.062 23:59:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.062 ************************************ 00:05:05.062 START TEST guess_driver 00:05:05.062 ************************************ 00:05:05.062 23:59:18 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:05.062 23:59:18 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:05.062 23:59:18 -- setup/driver.sh@47 -- # local fail=0 00:05:05.062 23:59:18 -- setup/driver.sh@49 -- # pick_driver 00:05:05.062 23:59:18 -- setup/driver.sh@36 -- # vfio 
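The guess_driver trace around this point selects which kernel driver the NVMe devices will be bound to: vfio-pci is preferred, but only when the host has populated IOMMU groups (or the explicit unsafe no-IOMMU opt-in is enabled); otherwise the test falls back to uio_pci_generic, accepting it only if modprobe can resolve it to real .ko modules. A rough sketch of that decision following the branches visible in the log; the function layout is simplified, and echoing vfio-pci on the success path is an assumption since this run never takes that branch:

shopt -s nullglob   # so an empty /sys/kernel/iommu_groups expands to zero elements

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci needs working IOMMU groups or the unsafe no-IOMMU override
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # fallback: uio_pci_generic, verified via its modprobe dependency chain
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}

driver=$(pick_driver)
echo "Looking for driver=$driver"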
00:05:05.062 23:59:18 -- setup/driver.sh@21 -- # local iommu_grups 00:05:05.062 23:59:18 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:05.062 23:59:18 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:05.062 23:59:18 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:05.062 23:59:18 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:05.062 23:59:18 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:05.062 23:59:18 -- setup/driver.sh@32 -- # return 1 00:05:05.062 23:59:18 -- setup/driver.sh@38 -- # uio 00:05:05.062 23:59:18 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:05.062 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:05.062 23:59:18 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:05.062 Looking for driver=uio_pci_generic 00:05:05.062 23:59:18 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:05.062 23:59:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.062 23:59:18 -- setup/driver.sh@45 -- # setup output config 00:05:05.062 23:59:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.062 23:59:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # continue 00:05:05.320 23:59:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.320 23:59:19 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:05.320 23:59:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.320 23:59:19 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:05.320 23:59:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.320 23:59:19 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:05.320 23:59:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.320 23:59:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.320 23:59:19 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:05.320 23:59:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.605 23:59:19 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:05.605 23:59:19 -- setup/driver.sh@65 -- # setup reset 00:05:05.605 23:59:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.605 23:59:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.167 00:05:12.167 real 0m6.943s 00:05:12.167 user 0m0.689s 00:05:12.167 sys 0m1.141s 00:05:12.167 23:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.167 ************************************ 00:05:12.167 END TEST 
guess_driver 00:05:12.168 ************************************ 00:05:12.168 23:59:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 ************************************ 00:05:12.168 END TEST driver 00:05:12.168 ************************************ 00:05:12.168 00:05:12.168 real 0m12.966s 00:05:12.168 user 0m1.045s 00:05:12.168 sys 0m1.860s 00:05:12.168 23:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.168 23:59:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 23:59:25 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:12.168 23:59:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.168 23:59:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 ************************************ 00:05:12.168 START TEST devices 00:05:12.168 ************************************ 00:05:12.168 23:59:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:12.168 * Looking for test storage... 00:05:12.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.168 23:59:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.168 23:59:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.168 23:59:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.168 23:59:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.168 23:59:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.168 23:59:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.168 23:59:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.168 23:59:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.168 23:59:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.168 23:59:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.168 23:59:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.168 23:59:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.168 23:59:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.168 23:59:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.168 23:59:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.168 23:59:25 -- scripts/common.sh@344 -- # : 1 00:05:12.168 23:59:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.168 23:59:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.168 23:59:25 -- scripts/common.sh@364 -- # decimal 1 00:05:12.168 23:59:25 -- scripts/common.sh@352 -- # local d=1 00:05:12.168 23:59:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.168 23:59:25 -- scripts/common.sh@354 -- # echo 1 00:05:12.168 23:59:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.168 23:59:25 -- scripts/common.sh@365 -- # decimal 2 00:05:12.168 23:59:25 -- scripts/common.sh@352 -- # local d=2 00:05:12.168 23:59:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.168 23:59:25 -- scripts/common.sh@354 -- # echo 2 00:05:12.168 23:59:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.168 23:59:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.168 23:59:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.168 23:59:25 -- scripts/common.sh@367 -- # return 0 00:05:12.168 23:59:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.168 --rc genhtml_branch_coverage=1 00:05:12.168 --rc genhtml_function_coverage=1 00:05:12.168 --rc genhtml_legend=1 00:05:12.168 --rc geninfo_all_blocks=1 00:05:12.168 --rc geninfo_unexecuted_blocks=1 00:05:12.168 00:05:12.168 ' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.168 --rc genhtml_branch_coverage=1 00:05:12.168 --rc genhtml_function_coverage=1 00:05:12.168 --rc genhtml_legend=1 00:05:12.168 --rc geninfo_all_blocks=1 00:05:12.168 --rc geninfo_unexecuted_blocks=1 00:05:12.168 00:05:12.168 ' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.168 --rc genhtml_branch_coverage=1 00:05:12.168 --rc genhtml_function_coverage=1 00:05:12.168 --rc genhtml_legend=1 00:05:12.168 --rc geninfo_all_blocks=1 00:05:12.168 --rc geninfo_unexecuted_blocks=1 00:05:12.168 00:05:12.168 ' 00:05:12.168 23:59:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.168 --rc genhtml_branch_coverage=1 00:05:12.168 --rc genhtml_function_coverage=1 00:05:12.168 --rc genhtml_legend=1 00:05:12.168 --rc geninfo_all_blocks=1 00:05:12.168 --rc geninfo_unexecuted_blocks=1 00:05:12.168 00:05:12.168 ' 00:05:12.168 23:59:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:12.168 23:59:25 -- setup/devices.sh@192 -- # setup reset 00:05:12.168 23:59:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.168 23:59:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.426 23:59:26 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:12.426 23:59:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:12.426 23:59:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:12.426 23:59:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:05:12.426 23:59:26 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.426 23:59:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:05:12.426 23:59:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:12.426 23:59:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.426 23:59:26 -- setup/devices.sh@196 -- # blocks=() 00:05:12.426 23:59:26 -- setup/devices.sh@196 -- # declare -a blocks 00:05:12.426 23:59:26 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:12.426 23:59:26 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:12.426 23:59:26 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:12.426 23:59:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.426 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:12.426 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:12.426 23:59:26 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:05:12.426 23:59:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:12.426 23:59:26 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:12.426 23:59:26 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 
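(The zoned-device scan traced above boils down to reading each namespace's queue/zoned attribute in sysfs. A minimal standalone sketch of that check, simplified from the traced helper and not the exact autotest_common.sh code:)

  # List NVMe namespaces whose zoned model is anything other than "none".
  for dev in /sys/block/nvme*; do
      zoned_file="$dev/queue/zoned"
      # Devices without the attribute are treated as conventional (not zoned).
      [[ -e "$zoned_file" ]] || continue
      if [[ "$(cat "$zoned_file")" != none ]]; then
          echo "zoned: ${dev##*/}"
      fi
  done
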
00:05:12.426 23:59:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:12.426 No valid GPT data, bailing 00:05:12.426 23:59:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.426 23:59:26 -- scripts/common.sh@393 -- # pt= 00:05:12.426 23:59:26 -- scripts/common.sh@394 -- # return 1 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:12.427 23:59:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:12.427 23:59:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:12.427 23:59:26 -- setup/common.sh@80 -- # echo 1073741824 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:12.427 23:59:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.427 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:12.427 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.427 23:59:26 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.427 23:59:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:12.427 23:59:26 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:12.427 23:59:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:12.427 No valid GPT data, bailing 00:05:12.427 23:59:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:12.427 23:59:26 -- scripts/common.sh@393 -- # pt= 00:05:12.427 23:59:26 -- scripts/common.sh@394 -- # return 1 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:12.427 23:59:26 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:12.427 23:59:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:12.427 23:59:26 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.427 23:59:26 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.427 23:59:26 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.427 23:59:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.427 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:12.427 23:59:26 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.427 23:59:26 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.427 23:59:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.427 23:59:26 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:12.427 23:59:26 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:12.427 23:59:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:12.427 No valid GPT data, bailing 00:05:12.427 23:59:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:12.427 23:59:27 -- scripts/common.sh@393 -- # pt= 00:05:12.427 23:59:27 -- scripts/common.sh@394 -- # return 1 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:12.685 23:59:27 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:12.685 23:59:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:12.685 23:59:27 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.685 23:59:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.685 23:59:27 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.685 23:59:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.685 23:59:27 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.685 23:59:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:12.685 23:59:27 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:12.685 23:59:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:12.685 No valid GPT data, bailing 00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # pt= 00:05:12.685 23:59:27 -- scripts/common.sh@394 -- # return 1 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:12.685 23:59:27 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:12.685 23:59:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:12.685 23:59:27 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.685 23:59:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.685 23:59:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.685 23:59:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:12.685 23:59:27 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:12.685 23:59:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:12.685 23:59:27 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:05:12.685 23:59:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:12.685 No valid GPT data, bailing 00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # pt= 00:05:12.685 23:59:27 -- scripts/common.sh@394 -- # return 1 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:12.685 23:59:27 -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:12.685 23:59:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:12.685 23:59:27 -- setup/common.sh@80 -- # echo 6343335936 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:12.685 23:59:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.685 23:59:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:12.685 23:59:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:12.685 23:59:27 -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:12.685 23:59:27 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:12.685 23:59:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:12.685 23:59:27 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:05:12.685 23:59:27 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:12.685 No valid GPT data, bailing 
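(Each block_in_use call above pairs the repo's spdk-gpt.py helper with a plain blkid probe; the fallback logic is essentially "does the disk already carry a partition table?". A hedged sketch of that probe alone, without the SPDK-specific GPT parser:)

  # Return success (disk in use) if the disk has any recognizable partition table.
  disk_has_partition_table() {
      local disk=$1 pt
      # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
      pt=$(blkid -s PTTYPE -o value "/dev/$disk")
      [[ -n "$pt" ]]
  }

  disk_has_partition_table nvme1n1 && echo "nvme1n1 is in use"
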
00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:12.685 23:59:27 -- scripts/common.sh@393 -- # pt= 00:05:12.685 23:59:27 -- scripts/common.sh@394 -- # return 1 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:12.685 23:59:27 -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:12.685 23:59:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:12.685 23:59:27 -- setup/common.sh@80 -- # echo 5368709120 00:05:12.685 23:59:27 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:12.685 23:59:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.685 23:59:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:12.685 23:59:27 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:12.685 23:59:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:05:12.685 23:59:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:12.685 23:59:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.685 23:59:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.685 23:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.685 ************************************ 00:05:12.685 START TEST nvme_mount 00:05:12.685 ************************************ 00:05:12.685 23:59:27 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:12.685 23:59:27 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:05:12.685 23:59:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:05:12.685 23:59:27 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.685 23:59:27 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.685 23:59:27 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:05:12.685 23:59:27 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:12.685 23:59:27 -- setup/common.sh@40 -- # local part_no=1 00:05:12.685 23:59:27 -- setup/common.sh@41 -- # local size=1073741824 00:05:12.685 23:59:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:12.686 23:59:27 -- setup/common.sh@44 -- # parts=() 00:05:12.686 23:59:27 -- setup/common.sh@44 -- # local parts 00:05:12.686 23:59:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:12.686 23:59:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.686 23:59:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.686 23:59:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:12.686 23:59:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.686 23:59:27 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:12.686 23:59:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:12.686 23:59:27 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:05:14.070 Creating new GPT entries in memory. 00:05:14.070 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.070 other utilities. 00:05:14.070 23:59:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.070 23:59:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.070 23:59:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:14.070 23:59:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.070 23:59:28 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:15.009 Creating new GPT entries in memory. 
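(What the trace performs on the chosen test disk here is an ordinary GPT wipe-and-repartition followed by format and mount. A rough equivalent outside the test harness, where the mount point is illustrative and udevadm settle stands in for the repo's sync_dev_uevents.sh helper:)

  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all                 # drop any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191       # one ~128 MiB partition starting at sector 2048
  udevadm settle                           # wait for the kernel/udev to publish the new node
  mkfs.ext4 -qF "${disk}p1"                # format it for the dummy test file
  mkdir -p /mnt/nvme_mount
  mount "${disk}p1" /mnt/nvme_mount
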
00:05:15.009 The operation has completed successfully. 00:05:15.009 23:59:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:15.009 23:59:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.009 23:59:29 -- setup/common.sh@62 -- # wait 66180 00:05:15.009 23:59:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.009 23:59:29 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:15.009 23:59:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.009 23:59:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:15.009 23:59:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:15.009 23:59:29 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.009 23:59:29 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.009 23:59:29 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:15.009 23:59:29 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:15.009 23:59:29 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.009 23:59:29 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.009 23:59:29 -- setup/devices.sh@53 -- # local found=0 00:05:15.009 23:59:29 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.009 23:59:29 -- setup/devices.sh@56 -- # : 00:05:15.009 23:59:29 -- setup/devices.sh@59 -- # local pci status 00:05:15.009 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.009 23:59:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:15.009 23:59:29 -- setup/devices.sh@47 -- # setup output config 00:05:15.009 23:59:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.009 23:59:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.009 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.009 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.009 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.009 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.269 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.269 23:59:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:15.269 23:59:29 -- setup/devices.sh@63 -- # found=1 00:05:15.269 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.269 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.269 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.269 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.269 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.528 23:59:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.528 23:59:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.528 23:59:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.528 23:59:29 -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.529 23:59:29 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.529 23:59:29 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.529 23:59:29 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.529 23:59:29 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.529 23:59:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.529 23:59:29 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.529 23:59:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:15.529 23:59:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:15.529 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.529 23:59:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:15.529 23:59:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:15.788 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:15.788 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:15.788 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.788 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:15.788 23:59:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:15.788 23:59:30 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:15.788 23:59:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.788 23:59:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:15.788 23:59:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:15.788 23:59:30 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.788 23:59:30 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.788 23:59:30 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:15.788 23:59:30 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:15.788 23:59:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.788 23:59:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.788 23:59:30 -- setup/devices.sh@53 -- # local found=0 00:05:15.789 23:59:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.789 23:59:30 -- setup/devices.sh@56 -- # : 00:05:15.789 23:59:30 -- setup/devices.sh@59 -- # local pci status 00:05:15.789 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.789 23:59:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:15.789 23:59:30 -- setup/devices.sh@47 -- # setup output config 00:05:15.789 23:59:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.789 23:59:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.789 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.789 23:59:30 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:16.049 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.049 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.310 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.310 23:59:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:16.310 23:59:30 -- setup/devices.sh@63 -- # found=1 00:05:16.310 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.310 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.310 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.310 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.310 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.310 23:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.310 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.571 23:59:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.571 23:59:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.571 23:59:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.571 23:59:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.571 23:59:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.571 23:59:30 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.571 23:59:30 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:16.571 23:59:30 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:16.571 23:59:30 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:16.571 23:59:30 -- setup/devices.sh@50 -- # local mount_point= 00:05:16.571 23:59:30 -- setup/devices.sh@51 -- # local test_file= 00:05:16.571 23:59:30 -- setup/devices.sh@53 -- # local found=0 00:05:16.571 23:59:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.571 23:59:30 -- setup/devices.sh@59 -- # local pci status 00:05:16.571 23:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.571 23:59:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:16.571 23:59:30 -- setup/devices.sh@47 -- # setup output config 00:05:16.571 23:59:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.571 23:59:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.571 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.571 23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.571 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.571 23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.831 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.831 23:59:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:16.831 23:59:31 -- setup/devices.sh@63 -- # found=1 00:05:16.831 23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.831 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.831 
23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.091 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.091 23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.091 23:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.091 23:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.091 23:59:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.091 23:59:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.091 23:59:31 -- setup/devices.sh@68 -- # return 0 00:05:17.091 23:59:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.091 23:59:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.091 23:59:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:17.091 23:59:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:17.091 23:59:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:17.091 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.091 00:05:17.091 real 0m4.451s 00:05:17.091 user 0m0.890s 00:05:17.091 sys 0m1.212s 00:05:17.091 23:59:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.091 23:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.091 ************************************ 00:05:17.091 END TEST nvme_mount 00:05:17.091 ************************************ 00:05:17.091 23:59:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.091 23:59:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.091 23:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.091 23:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.352 ************************************ 00:05:17.352 START TEST dm_mount 00:05:17.352 ************************************ 00:05:17.352 23:59:31 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:17.352 23:59:31 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:17.352 23:59:31 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:17.352 23:59:31 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:17.352 23:59:31 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:05:17.352 23:59:31 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:17.352 23:59:31 -- setup/common.sh@40 -- # local part_no=2 00:05:17.352 23:59:31 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.352 23:59:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.352 23:59:31 -- setup/common.sh@44 -- # parts=() 00:05:17.352 23:59:31 -- setup/common.sh@44 -- # local parts 00:05:17.352 23:59:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.352 23:59:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.353 23:59:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.353 23:59:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.353 23:59:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.353 23:59:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.353 23:59:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.353 23:59:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.353 23:59:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.353 23:59:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:17.353 23:59:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:18.295 Creating new GPT entries in memory. 00:05:18.295 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:18.295 other utilities. 00:05:18.295 23:59:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.295 23:59:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.295 23:59:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.295 23:59:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.295 23:59:32 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:19.238 Creating new GPT entries in memory. 00:05:19.238 The operation has completed successfully. 00:05:19.238 23:59:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.238 23:59:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.238 23:59:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.238 23:59:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.238 23:59:33 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:20.180 The operation has completed successfully. 00:05:20.180 23:59:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.180 23:59:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.180 23:59:34 -- setup/common.sh@62 -- # wait 66797 00:05:20.441 23:59:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.441 23:59:34 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.441 23:59:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.441 23:59:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.441 23:59:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.441 23:59:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.441 23:59:34 -- setup/devices.sh@161 -- # break 00:05:20.441 23:59:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.441 23:59:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.441 23:59:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.441 23:59:34 -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.441 23:59:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:20.441 23:59:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:20.441 23:59:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.441 23:59:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:20.441 23:59:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.441 23:59:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.441 23:59:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.441 23:59:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.441 23:59:34 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.441 23:59:34 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:20.441 23:59:34 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:20.441 23:59:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 
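(The dm_mount test combines the two freshly created partitions into a single device-mapper target before formatting it. The table it feeds to dmsetup is not visible in the trace, so the following is only a plausible sketch of a linear concatenation built by hand; sector counts are taken from blockdev rather than hard-coded:)

  # Concatenate two partitions into one linear device-mapper target.
  p1=/dev/nvme1n1p1
  p2=/dev/nvme1n1p2
  s1=$(blockdev --getsz "$p1")   # size of each partition in 512-byte sectors
  s2=$(blockdev --getsz "$p2")

  dmsetup create nvme_dm_test <<EOF
  0 $s1 linear $p1 0
  $s1 $s2 linear $p2 0
  EOF

  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  # ...exercise the mount...
  dmsetup remove --force nvme_dm_test
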
00:05:20.441 23:59:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.441 23:59:34 -- setup/devices.sh@53 -- # local found=0 00:05:20.441 23:59:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.441 23:59:34 -- setup/devices.sh@56 -- # : 00:05:20.441 23:59:34 -- setup/devices.sh@59 -- # local pci status 00:05:20.441 23:59:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.441 23:59:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:20.441 23:59:34 -- setup/devices.sh@47 -- # setup output config 00:05:20.441 23:59:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.441 23:59:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.441 23:59:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.441 23:59:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.708 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.708 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.708 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.708 23:59:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.708 23:59:35 -- setup/devices.sh@63 -- # found=1 00:05:20.708 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.708 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.709 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.972 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.972 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.972 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:20.972 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.972 23:59:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.972 23:59:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:20.972 23:59:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.972 23:59:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.972 23:59:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.972 23:59:35 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.972 23:59:35 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:20.972 23:59:35 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:20.972 23:59:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:20.972 23:59:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:20.972 23:59:35 -- setup/devices.sh@51 -- # local test_file= 00:05:20.972 23:59:35 -- setup/devices.sh@53 -- # local found=0 00:05:20.972 23:59:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.972 23:59:35 -- setup/devices.sh@59 -- # local pci status 00:05:20.972 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.972 23:59:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:20.972 
23:59:35 -- setup/devices.sh@47 -- # setup output config 00:05:20.972 23:59:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.972 23:59:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.233 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.233 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.233 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.233 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.495 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.495 23:59:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:21.495 23:59:35 -- setup/devices.sh@63 -- # found=1 00:05:21.495 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.495 23:59:35 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.495 23:59:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.495 23:59:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.495 23:59:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.757 23:59:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.757 23:59:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.757 23:59:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.757 23:59:36 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.757 23:59:36 -- setup/devices.sh@68 -- # return 0 00:05:21.757 23:59:36 -- setup/devices.sh@187 -- # cleanup_dm 00:05:21.757 23:59:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.757 23:59:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.757 23:59:36 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:21.757 23:59:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:21.757 23:59:36 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:21.757 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.757 23:59:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:21.758 23:59:36 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:21.758 00:05:21.758 real 0m4.534s 00:05:21.758 user 0m0.632s 00:05:21.758 sys 0m0.802s 00:05:21.758 23:59:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.758 23:59:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.758 ************************************ 00:05:21.758 END TEST dm_mount 00:05:21.758 ************************************ 00:05:21.758 23:59:36 -- setup/devices.sh@1 -- # cleanup 00:05:21.758 23:59:36 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:21.758 23:59:36 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.758 23:59:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:21.758 23:59:36 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:21.758 23:59:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:21.758 23:59:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:22.019 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.019 /dev/nvme1n1: 8 bytes were erased at offset 
0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.019 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.019 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:22.019 23:59:36 -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.019 23:59:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.019 23:59:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.019 23:59:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:22.019 23:59:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:22.019 23:59:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:22.019 23:59:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:22.019 00:05:22.019 real 0m10.711s 00:05:22.019 user 0m2.262s 00:05:22.019 sys 0m2.683s 00:05:22.019 23:59:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.019 23:59:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.019 ************************************ 00:05:22.019 END TEST devices 00:05:22.019 ************************************ 00:05:22.019 00:05:22.019 real 0m40.313s 00:05:22.019 user 0m7.794s 00:05:22.019 sys 0m10.825s 00:05:22.019 23:59:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.019 23:59:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.019 ************************************ 00:05:22.019 END TEST setup.sh 00:05:22.019 ************************************ 00:05:22.020 23:59:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.281 Hugepages 00:05:22.281 node hugesize free / total 00:05:22.281 node0 1048576kB 0 / 0 00:05:22.281 node0 2048kB 2048 / 2048 00:05:22.281 00:05:22.281 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.281 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:22.281 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:22.544 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:22.544 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:22.544 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:22.544 23:59:37 -- spdk/autotest.sh@128 -- # uname -s 00:05:22.544 23:59:37 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:22.544 23:59:37 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:22.544 23:59:37 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.487 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.487 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.487 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.487 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.487 23:59:38 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:24.872 23:59:39 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:24.872 23:59:39 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:24.872 23:59:39 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.872 23:59:39 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:24.872 23:59:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:24.872 23:59:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:24.872 23:59:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.872 23:59:39 -- 
common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.872 23:59:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:24.872 23:59:39 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:24.872 23:59:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:24.872 23:59:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.133 Waiting for block devices as requested 00:05:25.133 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.133 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.133 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.394 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.680 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:30.680 23:59:44 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:30.680 23:59:44 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1552 -- # continue 00:05:30.680 23:59:44 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:30.680 23:59:44 -- 
common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:30.680 23:59:44 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1552 -- # continue 00:05:30.680 23:59:44 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:30.680 23:59:44 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1552 -- # continue 00:05:30.680 23:59:44 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # 
readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:05:30.680 23:59:44 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:30.680 23:59:44 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:30.680 23:59:44 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:30.680 23:59:44 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:30.680 23:59:44 -- common/autotest_common.sh@1552 -- # continue 00:05:30.680 23:59:44 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:30.680 23:59:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.681 23:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.681 23:59:44 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:30.681 23:59:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.681 23:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.681 23:59:44 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.507 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.507 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.507 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.507 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.507 23:59:46 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:31.507 23:59:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.507 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:31.507 23:59:46 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:31.507 23:59:46 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:31.507 23:59:46 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.507 23:59:46 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:31.507 23:59:46 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:31.507 23:59:46 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:31.507 23:59:46 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:31.507 23:59:46 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:31.507 23:59:46 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
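(The oacs/unvmcap extraction traced above is field-scraping of nvme id-ctrl output. A condensed sketch of the same checks; the 0x8 mask is the Namespace Management bit of OACS per the NVMe spec, and the controller path is illustrative:)

  ctrl=/dev/nvme0
  # OACS is reported as a hex bitfield, e.g. "oacs : 0x12a".
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
      echo "$ctrl supports namespace management"
  fi
  # Unallocated NVM capacity; 0 means nothing needs reverting before the test run.
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
  (( unvmcap == 0 )) && echo "$ctrl has no unallocated capacity"
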
00:05:31.507 23:59:46 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:31.507 23:59:46 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.507 23:59:46 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:31.507 23:59:46 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:31.507 23:59:46 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:31.766 23:59:46 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:31.766 23:59:46 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:31.766 23:59:46 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:31.766 23:59:46 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:31.766 23:59:46 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1588 -- # return 0 00:05:31.766 23:59:46 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:31.766 23:59:46 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:31.766 23:59:46 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:31.766 23:59:46 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:31.766 23:59:46 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:31.766 23:59:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.766 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:31.766 23:59:46 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.766 23:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.766 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:31.766 ************************************ 00:05:31.766 START TEST env 00:05:31.766 ************************************ 00:05:31.766 23:59:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.766 * Looking for test storage... 
00:05:31.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:31.766 23:59:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.766 23:59:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.766 23:59:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.766 23:59:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.766 23:59:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.766 23:59:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.766 23:59:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.766 23:59:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.766 23:59:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.766 23:59:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.766 23:59:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.766 23:59:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.766 23:59:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.766 23:59:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.766 23:59:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.766 23:59:46 -- scripts/common.sh@344 -- # : 1 00:05:31.766 23:59:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.766 23:59:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.766 23:59:46 -- scripts/common.sh@364 -- # decimal 1 00:05:31.766 23:59:46 -- scripts/common.sh@352 -- # local d=1 00:05:31.766 23:59:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.766 23:59:46 -- scripts/common.sh@354 -- # echo 1 00:05:31.766 23:59:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.766 23:59:46 -- scripts/common.sh@365 -- # decimal 2 00:05:31.766 23:59:46 -- scripts/common.sh@352 -- # local d=2 00:05:31.766 23:59:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.766 23:59:46 -- scripts/common.sh@354 -- # echo 2 00:05:31.766 23:59:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.766 23:59:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.766 23:59:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.766 23:59:46 -- scripts/common.sh@367 -- # return 0 00:05:31.766 23:59:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.766 --rc genhtml_branch_coverage=1 00:05:31.766 --rc genhtml_function_coverage=1 00:05:31.766 --rc genhtml_legend=1 00:05:31.766 --rc geninfo_all_blocks=1 00:05:31.766 --rc geninfo_unexecuted_blocks=1 00:05:31.766 00:05:31.766 ' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.766 --rc genhtml_branch_coverage=1 00:05:31.766 --rc genhtml_function_coverage=1 00:05:31.766 --rc genhtml_legend=1 00:05:31.766 --rc geninfo_all_blocks=1 00:05:31.766 --rc geninfo_unexecuted_blocks=1 00:05:31.766 00:05:31.766 ' 00:05:31.766 23:59:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.766 --rc genhtml_branch_coverage=1 00:05:31.766 --rc genhtml_function_coverage=1 00:05:31.766 --rc genhtml_legend=1 00:05:31.766 --rc geninfo_all_blocks=1 00:05:31.766 --rc geninfo_unexecuted_blocks=1 00:05:31.766 00:05:31.766 ' 00:05:31.766 23:59:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.766 --rc genhtml_branch_coverage=1 00:05:31.766 --rc genhtml_function_coverage=1 00:05:31.766 --rc genhtml_legend=1 00:05:31.766 --rc geninfo_all_blocks=1 00:05:31.766 --rc geninfo_unexecuted_blocks=1 00:05:31.766 00:05:31.766 ' 00:05:31.766 23:59:46 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.767 23:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.767 23:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.767 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:31.767 ************************************ 00:05:31.767 START TEST env_memory 00:05:31.767 ************************************ 00:05:31.767 23:59:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.767 00:05:31.767 00:05:31.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.767 http://cunit.sourceforge.net/ 00:05:31.767 00:05:31.767 00:05:31.767 Suite: memory 00:05:31.767 Test: alloc and free memory map ...[2024-11-27 23:59:46.324958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:31.767 passed 00:05:31.767 Test: mem map translation ...[2024-11-27 23:59:46.363594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:31.767 [2024-11-27 23:59:46.363636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:31.767 [2024-11-27 23:59:46.363694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:31.767 [2024-11-27 23:59:46.363708] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.025 passed 00:05:32.025 Test: mem map registration ...[2024-11-27 23:59:46.431782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:32.025 [2024-11-27 23:59:46.431820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:32.025 passed 00:05:32.025 Test: mem map adjacent registrations ...passed 00:05:32.025 00:05:32.025 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.025 suites 1 1 n/a 0 0 00:05:32.025 tests 4 4 4 0 0 00:05:32.026 asserts 152 152 152 0 n/a 00:05:32.026 00:05:32.026 Elapsed time = 0.233 seconds 00:05:32.026 00:05:32.026 real 0m0.267s 00:05:32.026 user 0m0.239s 00:05:32.026 sys 0m0.021s 00:05:32.026 23:59:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.026 23:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 ************************************ 00:05:32.026 END TEST env_memory 00:05:32.026 ************************************ 00:05:32.026 23:59:46 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.026 23:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.026 23:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.026 23:59:46 -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.026 ************************************ 00:05:32.026 START TEST env_vtophys 00:05:32.026 ************************************ 00:05:32.026 23:59:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.026 EAL: lib.eal log level changed from notice to debug 00:05:32.026 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 1 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 2 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 3 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 4 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 5 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 6 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 7 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 8 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 9 as core 0 on socket 0 00:05:32.026 EAL: Maximum logical cores by configuration: 128 00:05:32.026 EAL: Detected CPU lcores: 10 00:05:32.026 EAL: Detected NUMA nodes: 1 00:05:32.026 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:32.026 EAL: Detected shared linkage of DPDK 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:32.026 EAL: Registered [vdev] bus. 00:05:32.026 EAL: bus.vdev log level changed from disabled to notice 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:32.026 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:32.026 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:32.026 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:32.026 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.026 EAL: No shared files mode enabled, IPC is disabled 00:05:32.026 EAL: Selected IOVA mode 'PA' 00:05:32.026 EAL: Probing VFIO support... 00:05:32.026 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.026 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:32.026 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.026 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.026 EAL: Setting up physically contiguous memory... 
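The EAL lines above are DPDK reserving virtual address space and preparing 2 MB-hugepage memseg lists before the vtophys test allocates from them. If that pool ever looks short on a test VM, the host-side hugepage accounting can be inspected directly (a quick sketch, assuming the 2 MB page size detected above):

# inspect the hugepage pool the EAL memseg lists are carved from
grep -i huge /proc/meminfo                                    # HugePages_Total / HugePages_Free / Hugepagesize
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # 2 MB pages, as detected above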
00:05:32.026 EAL: Setting maximum number of open files to 524288 00:05:32.026 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.026 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.026 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.026 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.026 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.026 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.026 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.026 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.026 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.026 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.026 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.026 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.026 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.026 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.026 EAL: Hugepages will be freed exactly as allocated. 00:05:32.026 EAL: No shared files mode enabled, IPC is disabled 00:05:32.026 EAL: No shared files mode enabled, IPC is disabled 00:05:32.284 EAL: TSC frequency is ~2600000 KHz 00:05:32.284 EAL: Main lcore 0 is ready (tid=7f0375af8a40;cpuset=[0]) 00:05:32.284 EAL: Trying to obtain current memory policy. 00:05:32.284 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.284 EAL: Restoring previous memory policy: 0 00:05:32.284 EAL: request: mp_malloc_sync 00:05:32.284 EAL: No shared files mode enabled, IPC is disabled 00:05:32.284 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.284 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.284 EAL: No shared files mode enabled, IPC is disabled 00:05:32.284 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.284 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.284 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:32.284 00:05:32.284 00:05:32.284 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.284 http://cunit.sourceforge.net/ 00:05:32.284 00:05:32.284 00:05:32.284 Suite: components_suite 00:05:32.543 Test: vtophys_malloc_test ...passed 00:05:32.543 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.543 EAL: Trying to obtain current memory policy. 
00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.543 EAL: No shared files mode enabled, IPC is disabled 00:05:32.543 EAL: Heap on socket 0 was shrunk by 130MB 00:05:32.543 EAL: Trying to obtain current memory policy. 00:05:32.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.543 EAL: Restoring previous memory policy: 4 00:05:32.543 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.543 EAL: request: mp_malloc_sync 00:05:32.802 EAL: No shared files mode enabled, IPC is disabled 00:05:32.802 EAL: Heap on socket 0 was expanded by 258MB 00:05:32.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.802 EAL: request: mp_malloc_sync 00:05:32.802 EAL: No shared files mode enabled, IPC is disabled 00:05:32.802 EAL: Heap on socket 0 was shrunk by 258MB 00:05:32.802 EAL: Trying to obtain current memory policy. 00:05:32.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.802 EAL: Restoring previous memory policy: 4 00:05:32.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.802 EAL: request: mp_malloc_sync 00:05:32.802 EAL: No shared files mode enabled, IPC is disabled 00:05:32.802 EAL: Heap on socket 0 was expanded by 514MB 00:05:32.802 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.802 EAL: request: mp_malloc_sync 00:05:32.802 EAL: No shared files mode enabled, IPC is disabled 00:05:32.802 EAL: Heap on socket 0 was shrunk by 514MB 00:05:32.802 EAL: Trying to obtain current memory policy. 
00:05:32.802 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.061 EAL: Restoring previous memory policy: 4 00:05:33.061 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.061 EAL: request: mp_malloc_sync 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.061 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.320 EAL: request: mp_malloc_sync 00:05:33.320 EAL: No shared files mode enabled, IPC is disabled 00:05:33.320 passed 00:05:33.320 00:05:33.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.320 suites 1 1 n/a 0 0 00:05:33.320 tests 2 2 2 0 0 00:05:33.320 asserts 5274 5274 5274 0 n/a 00:05:33.320 00:05:33.320 Elapsed time = 0.963 seconds 00:05:33.320 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:33.320 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.320 EAL: request: mp_malloc_sync 00:05:33.320 EAL: No shared files mode enabled, IPC is disabled 00:05:33.320 EAL: Heap on socket 0 was shrunk by 2MB 00:05:33.320 EAL: No shared files mode enabled, IPC is disabled 00:05:33.320 EAL: No shared files mode enabled, IPC is disabled 00:05:33.320 EAL: No shared files mode enabled, IPC is disabled 00:05:33.320 00:05:33.320 real 0m1.184s 00:05:33.320 user 0m0.492s 00:05:33.320 sys 0m0.554s 00:05:33.320 23:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.320 ************************************ 00:05:33.320 END TEST env_vtophys 00:05:33.320 ************************************ 00:05:33.320 23:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 23:59:47 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.320 23:59:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.320 23:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.320 23:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST env_pci 00:05:33.320 ************************************ 00:05:33.320 23:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.320 00:05:33.320 00:05:33.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.320 http://cunit.sourceforge.net/ 00:05:33.320 00:05:33.320 00:05:33.320 Suite: pci 00:05:33.320 Test: pci_hook ...[2024-11-27 23:59:47.818764] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68458 has claimed it 00:05:33.320 passed 00:05:33.320 00:05:33.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.320 suites 1 1 n/a 0 0 00:05:33.320 tests 1 1 1 0 0 00:05:33.320 asserts 25 25 25 0 n/a 00:05:33.320 00:05:33.320 Elapsed time = 0.004 seconds 00:05:33.320 EAL: Cannot find device (10000:00:01.0) 00:05:33.320 EAL: Failed to attach device on primary process 00:05:33.320 00:05:33.320 real 0m0.052s 00:05:33.320 user 0m0.022s 00:05:33.320 sys 0m0.029s 00:05:33.320 23:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.320 23:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST env_pci 00:05:33.320 ************************************ 00:05:33.320 23:59:47 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.320 23:59:47 -- env/env.sh@15 -- # uname 00:05:33.320 23:59:47 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.320 23:59:47 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:33.320 23:59:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.320 23:59:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:33.320 23:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.320 23:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST env_dpdk_post_init 00:05:33.320 ************************************ 00:05:33.320 23:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.579 EAL: Detected CPU lcores: 10 00:05:33.579 EAL: Detected NUMA nodes: 1 00:05:33.579 EAL: Detected shared linkage of DPDK 00:05:33.579 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.579 EAL: Selected IOVA mode 'PA' 00:05:33.579 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.579 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:33.579 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:33.579 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:33.579 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:33.579 Starting DPDK initialization... 00:05:33.579 Starting SPDK post initialization... 00:05:33.579 SPDK NVMe probe 00:05:33.579 Attaching to 0000:00:06.0 00:05:33.579 Attaching to 0000:00:07.0 00:05:33.579 Attaching to 0000:00:08.0 00:05:33.579 Attaching to 0000:00:09.0 00:05:33.579 Attached to 0000:00:09.0 00:05:33.579 Attached to 0000:00:06.0 00:05:33.579 Attached to 0000:00:07.0 00:05:33.579 Attached to 0000:00:08.0 00:05:33.579 Cleaning up... 00:05:33.579 00:05:33.579 real 0m0.221s 00:05:33.579 user 0m0.053s 00:05:33.579 sys 0m0.069s 00:05:33.579 ************************************ 00:05:33.579 END TEST env_dpdk_post_init 00:05:33.579 ************************************ 00:05:33.579 23:59:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.579 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.579 23:59:48 -- env/env.sh@26 -- # uname 00:05:33.579 23:59:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.579 23:59:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.579 23:59:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.579 23:59:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.579 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.579 ************************************ 00:05:33.579 START TEST env_mem_callbacks 00:05:33.579 ************************************ 00:05:33.579 23:59:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.837 EAL: Detected CPU lcores: 10 00:05:33.837 EAL: Detected NUMA nodes: 1 00:05:33.837 EAL: Detected shared linkage of DPDK 00:05:33.837 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.837 EAL: Selected IOVA mode 'PA' 00:05:33.837 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.837 00:05:33.837 00:05:33.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.837 http://cunit.sourceforge.net/ 00:05:33.837 00:05:33.837 00:05:33.837 Suite: memory 00:05:33.837 Test: test ... 
00:05:33.837 register 0x200000200000 2097152 00:05:33.837 malloc 3145728 00:05:33.837 register 0x200000400000 4194304 00:05:33.837 buf 0x200000500000 len 3145728 PASSED 00:05:33.837 malloc 64 00:05:33.837 buf 0x2000004fff40 len 64 PASSED 00:05:33.837 malloc 4194304 00:05:33.837 register 0x200000800000 6291456 00:05:33.837 buf 0x200000a00000 len 4194304 PASSED 00:05:33.837 free 0x200000500000 3145728 00:05:33.837 free 0x2000004fff40 64 00:05:33.837 unregister 0x200000400000 4194304 PASSED 00:05:33.837 free 0x200000a00000 4194304 00:05:33.837 unregister 0x200000800000 6291456 PASSED 00:05:33.837 malloc 8388608 00:05:33.837 register 0x200000400000 10485760 00:05:33.837 buf 0x200000600000 len 8388608 PASSED 00:05:33.837 free 0x200000600000 8388608 00:05:33.837 unregister 0x200000400000 10485760 PASSED 00:05:33.837 passed 00:05:33.837 00:05:33.837 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.837 suites 1 1 n/a 0 0 00:05:33.837 tests 1 1 1 0 0 00:05:33.837 asserts 15 15 15 0 n/a 00:05:33.837 00:05:33.837 Elapsed time = 0.008 seconds 00:05:33.837 00:05:33.837 real 0m0.169s 00:05:33.837 user 0m0.025s 00:05:33.837 sys 0m0.041s 00:05:33.837 23:59:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.837 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.837 ************************************ 00:05:33.837 END TEST env_mem_callbacks 00:05:33.837 ************************************ 00:05:33.837 00:05:33.837 real 0m2.251s 00:05:33.837 user 0m0.979s 00:05:33.837 sys 0m0.899s 00:05:33.837 23:59:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.837 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.837 ************************************ 00:05:33.837 END TEST env 00:05:33.837 ************************************ 00:05:33.837 23:59:48 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.837 23:59:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.837 23:59:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.837 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.096 ************************************ 00:05:34.096 START TEST rpc 00:05:34.096 ************************************ 00:05:34.096 23:59:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.096 * Looking for test storage... 
00:05:34.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.096 23:59:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.096 23:59:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.096 23:59:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.096 23:59:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.096 23:59:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.096 23:59:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.096 23:59:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.096 23:59:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.096 23:59:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.096 23:59:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.096 23:59:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.096 23:59:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.096 23:59:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.096 23:59:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.096 23:59:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.096 23:59:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.096 23:59:48 -- scripts/common.sh@344 -- # : 1 00:05:34.096 23:59:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.096 23:59:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.096 23:59:48 -- scripts/common.sh@364 -- # decimal 1 00:05:34.096 23:59:48 -- scripts/common.sh@352 -- # local d=1 00:05:34.096 23:59:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.096 23:59:48 -- scripts/common.sh@354 -- # echo 1 00:05:34.096 23:59:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.096 23:59:48 -- scripts/common.sh@365 -- # decimal 2 00:05:34.096 23:59:48 -- scripts/common.sh@352 -- # local d=2 00:05:34.096 23:59:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.096 23:59:48 -- scripts/common.sh@354 -- # echo 2 00:05:34.096 23:59:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.096 23:59:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.096 23:59:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
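The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is waitforlisten polling the freshly started spdk_tgt before any rpc_cmd calls are issued. This is not the actual helper, but the start-and-wait pattern is roughly the following (sketch, assuming scripts/rpc.py from the checked-out repo):

# rough sketch of the start-and-wait pattern used by test/rpc/rpc.sh
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "spdk_tgt (pid $spdk_pid) is listening on /var/tmp/spdk.sock"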
00:05:34.096 23:59:48 -- scripts/common.sh@367 -- # return 0 00:05:34.096 23:59:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.096 23:59:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.096 --rc genhtml_branch_coverage=1 00:05:34.096 --rc genhtml_function_coverage=1 00:05:34.096 --rc genhtml_legend=1 00:05:34.096 --rc geninfo_all_blocks=1 00:05:34.096 --rc geninfo_unexecuted_blocks=1 00:05:34.096 00:05:34.096 ' 00:05:34.096 23:59:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.096 --rc genhtml_branch_coverage=1 00:05:34.096 --rc genhtml_function_coverage=1 00:05:34.096 --rc genhtml_legend=1 00:05:34.096 --rc geninfo_all_blocks=1 00:05:34.096 --rc geninfo_unexecuted_blocks=1 00:05:34.096 00:05:34.096 ' 00:05:34.096 23:59:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.096 --rc genhtml_branch_coverage=1 00:05:34.096 --rc genhtml_function_coverage=1 00:05:34.096 --rc genhtml_legend=1 00:05:34.096 --rc geninfo_all_blocks=1 00:05:34.096 --rc geninfo_unexecuted_blocks=1 00:05:34.096 00:05:34.096 ' 00:05:34.096 23:59:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.096 --rc genhtml_branch_coverage=1 00:05:34.096 --rc genhtml_function_coverage=1 00:05:34.096 --rc genhtml_legend=1 00:05:34.096 --rc geninfo_all_blocks=1 00:05:34.096 --rc geninfo_unexecuted_blocks=1 00:05:34.096 00:05:34.096 ' 00:05:34.096 23:59:48 -- rpc/rpc.sh@65 -- # spdk_pid=68584 00:05:34.096 23:59:48 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.096 23:59:48 -- rpc/rpc.sh@67 -- # waitforlisten 68584 00:05:34.096 23:59:48 -- common/autotest_common.sh@829 -- # '[' -z 68584 ']' 00:05:34.096 23:59:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.096 23:59:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.096 23:59:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.096 23:59:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.096 23:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.096 23:59:48 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.096 [2024-11-27 23:59:48.648248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.096 [2024-11-27 23:59:48.648546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68584 ] 00:05:34.354 [2024-11-27 23:59:48.794902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.354 [2024-11-27 23:59:48.826544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.354 [2024-11-27 23:59:48.826726] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.354 [2024-11-27 23:59:48.826741] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68584' to capture a snapshot of events at runtime. 
00:05:34.354 [2024-11-27 23:59:48.826750] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68584 for offline analysis/debug. 00:05:34.354 [2024-11-27 23:59:48.826781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.921 23:59:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.921 23:59:49 -- common/autotest_common.sh@862 -- # return 0 00:05:34.921 23:59:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.921 23:59:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.921 23:59:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.921 23:59:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.921 23:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.921 23:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.921 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.921 ************************************ 00:05:34.921 START TEST rpc_integrity 00:05:34.921 ************************************ 00:05:34.921 23:59:49 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:34.921 23:59:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.921 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.921 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.921 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.921 23:59:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.921 23:59:49 -- rpc/rpc.sh@13 -- # jq length 00:05:35.179 23:59:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.179 23:59:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.179 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.179 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.179 23:59:49 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.179 23:59:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.179 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.179 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.179 23:59:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.179 { 00:05:35.179 "name": "Malloc0", 00:05:35.179 "aliases": [ 00:05:35.179 "b72e3144-0b5b-487e-9326-065b8d606fa3" 00:05:35.179 ], 00:05:35.179 "product_name": "Malloc disk", 00:05:35.179 "block_size": 512, 00:05:35.179 "num_blocks": 16384, 00:05:35.179 "uuid": "b72e3144-0b5b-487e-9326-065b8d606fa3", 00:05:35.179 "assigned_rate_limits": { 00:05:35.179 "rw_ios_per_sec": 0, 00:05:35.179 "rw_mbytes_per_sec": 0, 00:05:35.179 "r_mbytes_per_sec": 0, 00:05:35.179 "w_mbytes_per_sec": 0 00:05:35.179 }, 00:05:35.179 "claimed": false, 00:05:35.179 "zoned": false, 00:05:35.179 "supported_io_types": { 00:05:35.179 "read": true, 00:05:35.179 "write": true, 00:05:35.179 "unmap": true, 00:05:35.179 "write_zeroes": true, 00:05:35.179 "flush": true, 00:05:35.179 "reset": true, 00:05:35.179 "compare": false, 00:05:35.179 "compare_and_write": false, 00:05:35.179 "abort": true, 00:05:35.179 "nvme_admin": false, 00:05:35.179 
"nvme_io": false 00:05:35.179 }, 00:05:35.179 "memory_domains": [ 00:05:35.179 { 00:05:35.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.179 "dma_device_type": 2 00:05:35.179 } 00:05:35.179 ], 00:05:35.179 "driver_specific": {} 00:05:35.179 } 00:05:35.179 ]' 00:05:35.179 23:59:49 -- rpc/rpc.sh@17 -- # jq length 00:05:35.179 23:59:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.179 23:59:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.179 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.179 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 [2024-11-27 23:59:49.591386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.179 [2024-11-27 23:59:49.591443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.179 [2024-11-27 23:59:49.591466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:35.179 [2024-11-27 23:59:49.591480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.179 [2024-11-27 23:59:49.593641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.179 [2024-11-27 23:59:49.593770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.179 Passthru0 00:05:35.179 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.179 23:59:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.179 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.179 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.179 23:59:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.179 { 00:05:35.179 "name": "Malloc0", 00:05:35.179 "aliases": [ 00:05:35.179 "b72e3144-0b5b-487e-9326-065b8d606fa3" 00:05:35.179 ], 00:05:35.179 "product_name": "Malloc disk", 00:05:35.179 "block_size": 512, 00:05:35.179 "num_blocks": 16384, 00:05:35.179 "uuid": "b72e3144-0b5b-487e-9326-065b8d606fa3", 00:05:35.179 "assigned_rate_limits": { 00:05:35.179 "rw_ios_per_sec": 0, 00:05:35.179 "rw_mbytes_per_sec": 0, 00:05:35.180 "r_mbytes_per_sec": 0, 00:05:35.180 "w_mbytes_per_sec": 0 00:05:35.180 }, 00:05:35.180 "claimed": true, 00:05:35.180 "claim_type": "exclusive_write", 00:05:35.180 "zoned": false, 00:05:35.180 "supported_io_types": { 00:05:35.180 "read": true, 00:05:35.180 "write": true, 00:05:35.180 "unmap": true, 00:05:35.180 "write_zeroes": true, 00:05:35.180 "flush": true, 00:05:35.180 "reset": true, 00:05:35.180 "compare": false, 00:05:35.180 "compare_and_write": false, 00:05:35.180 "abort": true, 00:05:35.180 "nvme_admin": false, 00:05:35.180 "nvme_io": false 00:05:35.180 }, 00:05:35.180 "memory_domains": [ 00:05:35.180 { 00:05:35.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.180 "dma_device_type": 2 00:05:35.180 } 00:05:35.180 ], 00:05:35.180 "driver_specific": {} 00:05:35.180 }, 00:05:35.180 { 00:05:35.180 "name": "Passthru0", 00:05:35.180 "aliases": [ 00:05:35.180 "bf2f24e8-0e48-5590-8a99-e81e15ad364f" 00:05:35.180 ], 00:05:35.180 "product_name": "passthru", 00:05:35.180 "block_size": 512, 00:05:35.180 "num_blocks": 16384, 00:05:35.180 "uuid": "bf2f24e8-0e48-5590-8a99-e81e15ad364f", 00:05:35.180 "assigned_rate_limits": { 00:05:35.180 "rw_ios_per_sec": 0, 00:05:35.180 "rw_mbytes_per_sec": 0, 00:05:35.180 "r_mbytes_per_sec": 0, 00:05:35.180 "w_mbytes_per_sec": 0 00:05:35.180 }, 00:05:35.180 "claimed": false, 00:05:35.180 
"zoned": false, 00:05:35.180 "supported_io_types": { 00:05:35.180 "read": true, 00:05:35.180 "write": true, 00:05:35.180 "unmap": true, 00:05:35.180 "write_zeroes": true, 00:05:35.180 "flush": true, 00:05:35.180 "reset": true, 00:05:35.180 "compare": false, 00:05:35.180 "compare_and_write": false, 00:05:35.180 "abort": true, 00:05:35.180 "nvme_admin": false, 00:05:35.180 "nvme_io": false 00:05:35.180 }, 00:05:35.180 "memory_domains": [ 00:05:35.180 { 00:05:35.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.180 "dma_device_type": 2 00:05:35.180 } 00:05:35.180 ], 00:05:35.180 "driver_specific": { 00:05:35.180 "passthru": { 00:05:35.180 "name": "Passthru0", 00:05:35.180 "base_bdev_name": "Malloc0" 00:05:35.180 } 00:05:35.180 } 00:05:35.180 } 00:05:35.180 ]' 00:05:35.180 23:59:49 -- rpc/rpc.sh@21 -- # jq length 00:05:35.180 23:59:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.180 23:59:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.180 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.180 23:59:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.180 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.180 23:59:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.180 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.180 23:59:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.180 23:59:49 -- rpc/rpc.sh@26 -- # jq length 00:05:35.180 ************************************ 00:05:35.180 END TEST rpc_integrity 00:05:35.180 ************************************ 00:05:35.180 23:59:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.180 00:05:35.180 real 0m0.221s 00:05:35.180 user 0m0.123s 00:05:35.180 sys 0m0.031s 00:05:35.180 23:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 23:59:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:35.180 23:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.180 23:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 ************************************ 00:05:35.180 START TEST rpc_plugins 00:05:35.180 ************************************ 00:05:35.180 23:59:49 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:35.180 23:59:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:35.180 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.180 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.180 23:59:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:35.180 23:59:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:35.180 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.180 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.437 23:59:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:35.437 { 00:05:35.437 "name": "Malloc1", 00:05:35.437 
"aliases": [ 00:05:35.437 "552293bd-1b51-403f-8778-1c776c15a466" 00:05:35.437 ], 00:05:35.437 "product_name": "Malloc disk", 00:05:35.437 "block_size": 4096, 00:05:35.437 "num_blocks": 256, 00:05:35.437 "uuid": "552293bd-1b51-403f-8778-1c776c15a466", 00:05:35.437 "assigned_rate_limits": { 00:05:35.437 "rw_ios_per_sec": 0, 00:05:35.437 "rw_mbytes_per_sec": 0, 00:05:35.437 "r_mbytes_per_sec": 0, 00:05:35.437 "w_mbytes_per_sec": 0 00:05:35.437 }, 00:05:35.437 "claimed": false, 00:05:35.437 "zoned": false, 00:05:35.437 "supported_io_types": { 00:05:35.437 "read": true, 00:05:35.437 "write": true, 00:05:35.437 "unmap": true, 00:05:35.437 "write_zeroes": true, 00:05:35.437 "flush": true, 00:05:35.437 "reset": true, 00:05:35.437 "compare": false, 00:05:35.437 "compare_and_write": false, 00:05:35.437 "abort": true, 00:05:35.437 "nvme_admin": false, 00:05:35.437 "nvme_io": false 00:05:35.437 }, 00:05:35.437 "memory_domains": [ 00:05:35.437 { 00:05:35.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.437 "dma_device_type": 2 00:05:35.437 } 00:05:35.437 ], 00:05:35.437 "driver_specific": {} 00:05:35.437 } 00:05:35.437 ]' 00:05:35.437 23:59:49 -- rpc/rpc.sh@32 -- # jq length 00:05:35.437 23:59:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:35.437 23:59:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:35.437 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.437 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.437 23:59:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:35.437 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.437 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.437 23:59:49 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:35.437 23:59:49 -- rpc/rpc.sh@36 -- # jq length 00:05:35.437 ************************************ 00:05:35.437 END TEST rpc_plugins 00:05:35.437 ************************************ 00:05:35.437 23:59:49 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:35.437 00:05:35.437 real 0m0.115s 00:05:35.437 user 0m0.063s 00:05:35.437 sys 0m0.015s 00:05:35.437 23:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.437 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 23:59:49 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.437 23:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.437 23:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.437 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 ************************************ 00:05:35.437 START TEST rpc_trace_cmd_test 00:05:35.437 ************************************ 00:05:35.437 23:59:49 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:35.437 23:59:49 -- rpc/rpc.sh@40 -- # local info 00:05:35.437 23:59:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.437 23:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.437 23:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.437 23:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.437 23:59:49 -- rpc/rpc.sh@42 -- # info='{ 00:05:35.437 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68584", 00:05:35.437 "tpoint_group_mask": "0x8", 00:05:35.437 "iscsi_conn": { 00:05:35.437 "mask": "0x2", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "scsi": { 00:05:35.437 "mask": 
"0x4", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "bdev": { 00:05:35.437 "mask": "0x8", 00:05:35.437 "tpoint_mask": "0xffffffffffffffff" 00:05:35.437 }, 00:05:35.437 "nvmf_rdma": { 00:05:35.437 "mask": "0x10", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "nvmf_tcp": { 00:05:35.437 "mask": "0x20", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "ftl": { 00:05:35.437 "mask": "0x40", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "blobfs": { 00:05:35.437 "mask": "0x80", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "dsa": { 00:05:35.437 "mask": "0x200", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "thread": { 00:05:35.437 "mask": "0x400", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "nvme_pcie": { 00:05:35.437 "mask": "0x800", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "iaa": { 00:05:35.437 "mask": "0x1000", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "nvme_tcp": { 00:05:35.437 "mask": "0x2000", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 }, 00:05:35.437 "bdev_nvme": { 00:05:35.437 "mask": "0x4000", 00:05:35.437 "tpoint_mask": "0x0" 00:05:35.437 } 00:05:35.437 }' 00:05:35.437 23:59:49 -- rpc/rpc.sh@43 -- # jq length 00:05:35.437 23:59:49 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:35.437 23:59:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.437 23:59:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.437 23:59:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.695 23:59:50 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.695 23:59:50 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.695 23:59:50 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.695 23:59:50 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.695 ************************************ 00:05:35.695 END TEST rpc_trace_cmd_test 00:05:35.695 ************************************ 00:05:35.695 23:59:50 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.695 00:05:35.695 real 0m0.165s 00:05:35.695 user 0m0.137s 00:05:35.695 sys 0m0.018s 00:05:35.695 23:59:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 23:59:50 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.695 23:59:50 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.695 23:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.695 23:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 ************************************ 00:05:35.695 START TEST rpc_daemon_integrity 00:05:35.695 ************************************ 00:05:35.695 23:59:50 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:35.695 23:59:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.695 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.695 23:59:50 -- rpc/rpc.sh@13 -- # jq length 00:05:35.695 23:59:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.695 23:59:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.695 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.695 23:59:50 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.695 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.695 23:59:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.695 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.695 { 00:05:35.695 "name": "Malloc2", 00:05:35.695 "aliases": [ 00:05:35.695 "be32a84d-d524-4dfe-bc71-6c586eef8850" 00:05:35.695 ], 00:05:35.695 "product_name": "Malloc disk", 00:05:35.695 "block_size": 512, 00:05:35.695 "num_blocks": 16384, 00:05:35.695 "uuid": "be32a84d-d524-4dfe-bc71-6c586eef8850", 00:05:35.695 "assigned_rate_limits": { 00:05:35.695 "rw_ios_per_sec": 0, 00:05:35.695 "rw_mbytes_per_sec": 0, 00:05:35.695 "r_mbytes_per_sec": 0, 00:05:35.695 "w_mbytes_per_sec": 0 00:05:35.695 }, 00:05:35.695 "claimed": false, 00:05:35.695 "zoned": false, 00:05:35.695 "supported_io_types": { 00:05:35.695 "read": true, 00:05:35.695 "write": true, 00:05:35.695 "unmap": true, 00:05:35.695 "write_zeroes": true, 00:05:35.695 "flush": true, 00:05:35.695 "reset": true, 00:05:35.695 "compare": false, 00:05:35.695 "compare_and_write": false, 00:05:35.695 "abort": true, 00:05:35.695 "nvme_admin": false, 00:05:35.695 "nvme_io": false 00:05:35.695 }, 00:05:35.695 "memory_domains": [ 00:05:35.695 { 00:05:35.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.695 "dma_device_type": 2 00:05:35.695 } 00:05:35.695 ], 00:05:35.695 "driver_specific": {} 00:05:35.695 } 00:05:35.695 ]' 00:05:35.695 23:59:50 -- rpc/rpc.sh@17 -- # jq length 00:05:35.695 23:59:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.695 23:59:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.695 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 [2024-11-27 23:59:50.263791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.695 [2024-11-27 23:59:50.263850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.695 [2024-11-27 23:59:50.263870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:35.695 [2024-11-27 23:59:50.263881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.695 [2024-11-27 23:59:50.266064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.695 [2024-11-27 23:59:50.266104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.695 Passthru0 00:05:35.695 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.695 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.695 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.695 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.695 23:59:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.695 { 00:05:35.696 "name": "Malloc2", 00:05:35.696 "aliases": [ 00:05:35.696 "be32a84d-d524-4dfe-bc71-6c586eef8850" 00:05:35.696 ], 00:05:35.696 "product_name": "Malloc disk", 00:05:35.696 "block_size": 512, 00:05:35.696 "num_blocks": 16384, 00:05:35.696 "uuid": "be32a84d-d524-4dfe-bc71-6c586eef8850", 00:05:35.696 
"assigned_rate_limits": { 00:05:35.696 "rw_ios_per_sec": 0, 00:05:35.696 "rw_mbytes_per_sec": 0, 00:05:35.696 "r_mbytes_per_sec": 0, 00:05:35.696 "w_mbytes_per_sec": 0 00:05:35.696 }, 00:05:35.696 "claimed": true, 00:05:35.696 "claim_type": "exclusive_write", 00:05:35.696 "zoned": false, 00:05:35.696 "supported_io_types": { 00:05:35.696 "read": true, 00:05:35.696 "write": true, 00:05:35.696 "unmap": true, 00:05:35.696 "write_zeroes": true, 00:05:35.696 "flush": true, 00:05:35.696 "reset": true, 00:05:35.696 "compare": false, 00:05:35.696 "compare_and_write": false, 00:05:35.696 "abort": true, 00:05:35.696 "nvme_admin": false, 00:05:35.696 "nvme_io": false 00:05:35.696 }, 00:05:35.696 "memory_domains": [ 00:05:35.696 { 00:05:35.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.696 "dma_device_type": 2 00:05:35.696 } 00:05:35.696 ], 00:05:35.696 "driver_specific": {} 00:05:35.696 }, 00:05:35.696 { 00:05:35.696 "name": "Passthru0", 00:05:35.696 "aliases": [ 00:05:35.696 "38c9ef22-ec3d-59af-b0c4-4e770b18e13d" 00:05:35.696 ], 00:05:35.696 "product_name": "passthru", 00:05:35.696 "block_size": 512, 00:05:35.696 "num_blocks": 16384, 00:05:35.696 "uuid": "38c9ef22-ec3d-59af-b0c4-4e770b18e13d", 00:05:35.696 "assigned_rate_limits": { 00:05:35.696 "rw_ios_per_sec": 0, 00:05:35.696 "rw_mbytes_per_sec": 0, 00:05:35.696 "r_mbytes_per_sec": 0, 00:05:35.696 "w_mbytes_per_sec": 0 00:05:35.696 }, 00:05:35.696 "claimed": false, 00:05:35.696 "zoned": false, 00:05:35.696 "supported_io_types": { 00:05:35.696 "read": true, 00:05:35.696 "write": true, 00:05:35.696 "unmap": true, 00:05:35.696 "write_zeroes": true, 00:05:35.696 "flush": true, 00:05:35.696 "reset": true, 00:05:35.696 "compare": false, 00:05:35.696 "compare_and_write": false, 00:05:35.696 "abort": true, 00:05:35.696 "nvme_admin": false, 00:05:35.696 "nvme_io": false 00:05:35.696 }, 00:05:35.696 "memory_domains": [ 00:05:35.696 { 00:05:35.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.696 "dma_device_type": 2 00:05:35.696 } 00:05:35.696 ], 00:05:35.696 "driver_specific": { 00:05:35.696 "passthru": { 00:05:35.696 "name": "Passthru0", 00:05:35.696 "base_bdev_name": "Malloc2" 00:05:35.696 } 00:05:35.696 } 00:05:35.696 } 00:05:35.696 ]' 00:05:35.696 23:59:50 -- rpc/rpc.sh@21 -- # jq length 00:05:35.953 23:59:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.953 23:59:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.953 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.953 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.953 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.953 23:59:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.953 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.953 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.953 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.953 23:59:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.953 23:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.953 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.953 23:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.953 23:59:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.953 23:59:50 -- rpc/rpc.sh@26 -- # jq length 00:05:35.953 ************************************ 00:05:35.953 END TEST rpc_daemon_integrity 00:05:35.953 ************************************ 00:05:35.953 23:59:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.953 00:05:35.953 real 
0m0.232s 00:05:35.953 user 0m0.124s 00:05:35.953 sys 0m0.036s 00:05:35.953 23:59:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.953 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.953 23:59:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.953 23:59:50 -- rpc/rpc.sh@84 -- # killprocess 68584 00:05:35.953 23:59:50 -- common/autotest_common.sh@936 -- # '[' -z 68584 ']' 00:05:35.953 23:59:50 -- common/autotest_common.sh@940 -- # kill -0 68584 00:05:35.953 23:59:50 -- common/autotest_common.sh@941 -- # uname 00:05:35.953 23:59:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.953 23:59:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68584 00:05:35.953 killing process with pid 68584 00:05:35.953 23:59:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.953 23:59:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.953 23:59:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68584' 00:05:35.953 23:59:50 -- common/autotest_common.sh@955 -- # kill 68584 00:05:35.953 23:59:50 -- common/autotest_common.sh@960 -- # wait 68584 00:05:36.211 00:05:36.211 real 0m2.251s 00:05:36.211 user 0m2.660s 00:05:36.211 sys 0m0.568s 00:05:36.211 23:59:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.211 ************************************ 00:05:36.211 END TEST rpc 00:05:36.211 ************************************ 00:05:36.211 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.211 23:59:50 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:36.211 23:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.211 23:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.211 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.211 ************************************ 00:05:36.211 START TEST rpc_client 00:05:36.211 ************************************ 00:05:36.211 23:59:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:36.211 * Looking for test storage... 
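For reference, the rpc_daemon_integrity pass above drives the malloc and passthru bdev RPCs through the rpc_cmd helper. A rough manual equivalent against an already running spdk_tgt (a sketch only; it reuses the sizes from the bdev dump above, 16384 blocks of 512 bytes, and exact rpc.py option spellings may vary by SPDK version) would be:

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py bdev_malloc_create -b Malloc2 8 512           # 8 MiB malloc bdev, 512 B blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0  # claims Malloc2 behind a passthru
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2, as the test checks above
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0 after cleanup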
00:05:36.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:36.470 23:59:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.470 23:59:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.470 23:59:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.470 23:59:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.470 23:59:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.470 23:59:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.470 23:59:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.470 23:59:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.470 23:59:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.470 23:59:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.470 23:59:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.470 23:59:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.470 23:59:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.470 23:59:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.470 23:59:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.470 23:59:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.470 23:59:50 -- scripts/common.sh@344 -- # : 1 00:05:36.470 23:59:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.470 23:59:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.470 23:59:50 -- scripts/common.sh@364 -- # decimal 1 00:05:36.470 23:59:50 -- scripts/common.sh@352 -- # local d=1 00:05:36.470 23:59:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.470 23:59:50 -- scripts/common.sh@354 -- # echo 1 00:05:36.470 23:59:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.470 23:59:50 -- scripts/common.sh@365 -- # decimal 2 00:05:36.470 23:59:50 -- scripts/common.sh@352 -- # local d=2 00:05:36.470 23:59:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.470 23:59:50 -- scripts/common.sh@354 -- # echo 2 00:05:36.470 23:59:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.470 23:59:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.470 23:59:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.470 23:59:50 -- scripts/common.sh@367 -- # return 0 00:05:36.470 23:59:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.470 23:59:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 23:59:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 23:59:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 
23:59:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.470 --rc genhtml_branch_coverage=1 00:05:36.470 --rc genhtml_function_coverage=1 00:05:36.470 --rc genhtml_legend=1 00:05:36.470 --rc geninfo_all_blocks=1 00:05:36.470 --rc geninfo_unexecuted_blocks=1 00:05:36.470 00:05:36.470 ' 00:05:36.470 23:59:50 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:36.470 OK 00:05:36.470 23:59:50 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.470 00:05:36.470 real 0m0.181s 00:05:36.470 user 0m0.097s 00:05:36.470 sys 0m0.087s 00:05:36.470 23:59:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.470 ************************************ 00:05:36.470 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.470 END TEST rpc_client 00:05:36.470 ************************************ 00:05:36.470 23:59:50 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:36.470 23:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.470 23:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.470 23:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.470 ************************************ 00:05:36.470 START TEST json_config 00:05:36.470 ************************************ 00:05:36.470 23:59:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:36.470 23:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.470 23:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.470 23:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.730 23:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.730 23:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.730 23:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.730 23:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.730 23:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.730 23:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.730 23:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.730 23:59:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.730 23:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.730 23:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.730 23:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.730 23:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.730 23:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.730 23:59:51 -- scripts/common.sh@344 -- # : 1 00:05:36.730 23:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.730 23:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.730 23:59:51 -- scripts/common.sh@364 -- # decimal 1 00:05:36.730 23:59:51 -- scripts/common.sh@352 -- # local d=1 00:05:36.730 23:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.730 23:59:51 -- scripts/common.sh@354 -- # echo 1 00:05:36.730 23:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.730 23:59:51 -- scripts/common.sh@365 -- # decimal 2 00:05:36.730 23:59:51 -- scripts/common.sh@352 -- # local d=2 00:05:36.730 23:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.730 23:59:51 -- scripts/common.sh@354 -- # echo 2 00:05:36.730 23:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.730 23:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.730 23:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.730 23:59:51 -- scripts/common.sh@367 -- # return 0 00:05:36.730 23:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.730 --rc genhtml_branch_coverage=1 00:05:36.730 --rc genhtml_function_coverage=1 00:05:36.730 --rc genhtml_legend=1 00:05:36.730 --rc geninfo_all_blocks=1 00:05:36.730 --rc geninfo_unexecuted_blocks=1 00:05:36.730 00:05:36.730 ' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.730 --rc genhtml_branch_coverage=1 00:05:36.730 --rc genhtml_function_coverage=1 00:05:36.730 --rc genhtml_legend=1 00:05:36.730 --rc geninfo_all_blocks=1 00:05:36.730 --rc geninfo_unexecuted_blocks=1 00:05:36.730 00:05:36.730 ' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.730 --rc genhtml_branch_coverage=1 00:05:36.730 --rc genhtml_function_coverage=1 00:05:36.730 --rc genhtml_legend=1 00:05:36.730 --rc geninfo_all_blocks=1 00:05:36.730 --rc geninfo_unexecuted_blocks=1 00:05:36.730 00:05:36.730 ' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.730 --rc genhtml_branch_coverage=1 00:05:36.730 --rc genhtml_function_coverage=1 00:05:36.730 --rc genhtml_legend=1 00:05:36.730 --rc geninfo_all_blocks=1 00:05:36.730 --rc geninfo_unexecuted_blocks=1 00:05:36.730 00:05:36.730 ' 00:05:36.730 23:59:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.730 23:59:51 -- nvmf/common.sh@7 -- # uname -s 00:05:36.730 23:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.730 23:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.730 23:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.730 23:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.730 23:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.730 23:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.730 23:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.730 23:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.730 23:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.730 23:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.730 WARNING: No tests are enabled so not running JSON configuration tests 00:05:36.730 23:59:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:05:36.730 23:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:05:36.730 23:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.730 23:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.730 23:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.730 23:59:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.730 23:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.730 23:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.730 23:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.730 23:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.730 23:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.730 23:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.730 23:59:51 -- paths/export.sh@5 -- # export PATH 00:05:36.730 23:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.730 23:59:51 -- nvmf/common.sh@46 -- # : 0 00:05:36.730 23:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:36.730 23:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:36.730 23:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:36.730 23:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.730 23:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.730 23:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:36.730 23:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:36.730 23:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:36.730 23:59:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:36.730 23:59:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:36.730 23:59:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:36.730 23:59:51 -- json_config/json_config.sh@25 -- # (( 
SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.730 23:59:51 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:36.730 23:59:51 -- json_config/json_config.sh@27 -- # exit 0 00:05:36.730 00:05:36.730 real 0m0.134s 00:05:36.730 user 0m0.075s 00:05:36.730 sys 0m0.057s 00:05:36.730 23:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.730 23:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.730 ************************************ 00:05:36.730 END TEST json_config 00:05:36.730 ************************************ 00:05:36.730 23:59:51 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.730 23:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.730 23:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.730 ************************************ 00:05:36.730 START TEST json_config_extra_key 00:05:36.730 ************************************ 00:05:36.730 23:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.730 23:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.730 23:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.730 23:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.730 23:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.730 23:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.730 23:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.731 23:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.731 23:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.731 23:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.731 23:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.731 23:59:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.731 23:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.731 23:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.731 23:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.731 23:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.731 23:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.731 23:59:51 -- scripts/common.sh@344 -- # : 1 00:05:36.731 23:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.731 23:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.731 23:59:51 -- scripts/common.sh@364 -- # decimal 1 00:05:36.731 23:59:51 -- scripts/common.sh@352 -- # local d=1 00:05:36.731 23:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.731 23:59:51 -- scripts/common.sh@354 -- # echo 1 00:05:36.731 23:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.731 23:59:51 -- scripts/common.sh@365 -- # decimal 2 00:05:36.731 23:59:51 -- scripts/common.sh@352 -- # local d=2 00:05:36.731 23:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.731 23:59:51 -- scripts/common.sh@354 -- # echo 2 00:05:36.731 23:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.731 23:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.731 23:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.731 23:59:51 -- scripts/common.sh@367 -- # return 0 00:05:36.731 23:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.731 23:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.731 --rc genhtml_branch_coverage=1 00:05:36.731 --rc genhtml_function_coverage=1 00:05:36.731 --rc genhtml_legend=1 00:05:36.731 --rc geninfo_all_blocks=1 00:05:36.731 --rc geninfo_unexecuted_blocks=1 00:05:36.731 00:05:36.731 ' 00:05:36.731 23:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.731 --rc genhtml_branch_coverage=1 00:05:36.731 --rc genhtml_function_coverage=1 00:05:36.731 --rc genhtml_legend=1 00:05:36.731 --rc geninfo_all_blocks=1 00:05:36.731 --rc geninfo_unexecuted_blocks=1 00:05:36.731 00:05:36.731 ' 00:05:36.731 23:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.731 --rc genhtml_branch_coverage=1 00:05:36.731 --rc genhtml_function_coverage=1 00:05:36.731 --rc genhtml_legend=1 00:05:36.731 --rc geninfo_all_blocks=1 00:05:36.731 --rc geninfo_unexecuted_blocks=1 00:05:36.731 00:05:36.731 ' 00:05:36.731 23:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.731 --rc genhtml_branch_coverage=1 00:05:36.731 --rc genhtml_function_coverage=1 00:05:36.731 --rc genhtml_legend=1 00:05:36.731 --rc geninfo_all_blocks=1 00:05:36.731 --rc geninfo_unexecuted_blocks=1 00:05:36.731 00:05:36.731 ' 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.731 23:59:51 -- nvmf/common.sh@7 -- # uname -s 00:05:36.731 23:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.731 23:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.731 23:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.731 23:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.731 23:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.731 23:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.731 23:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.731 23:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.731 23:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.731 23:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.731 23:59:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:05:36.731 23:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=bcb6a7d8-a1d2-4c61-9a0c-c595d3d9b2c6 00:05:36.731 23:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.731 23:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.731 23:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.731 23:59:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.731 23:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.731 23:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.731 23:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.731 23:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.731 23:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.731 23:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.731 23:59:51 -- paths/export.sh@5 -- # export PATH 00:05:36.731 23:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.731 23:59:51 -- nvmf/common.sh@46 -- # : 0 00:05:36.731 23:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:36.731 23:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:36.731 23:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:36.731 23:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.731 23:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.731 23:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:36.731 23:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:36.731 23:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:36.731 INFO: launching applications... 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68867 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.731 Waiting for target to run... 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:36.731 23:59:51 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68867 /var/tmp/spdk_tgt.sock 00:05:36.731 23:59:51 -- common/autotest_common.sh@829 -- # '[' -z 68867 ']' 00:05:36.731 23:59:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.731 23:59:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.731 23:59:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.731 23:59:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.731 23:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 [2024-11-27 23:59:51.394142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.990 [2024-11-27 23:59:51.394393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68867 ] 00:05:37.248 [2024-11-27 23:59:51.696454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.248 [2024-11-27 23:59:51.714084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.248 [2024-11-27 23:59:51.714281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.816 00:05:37.816 INFO: shutting down applications... 
00:05:37.816 23:59:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.816 23:59:52 -- common/autotest_common.sh@862 -- # return 0 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68867 ]] 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68867 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68867 00:05:37.816 23:59:52 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68867 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:38.384 SPDK target shutdown done 00:05:38.384 23:59:52 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:38.384 Success 00:05:38.384 00:05:38.384 real 0m1.535s 00:05:38.384 user 0m1.213s 00:05:38.384 sys 0m0.323s 00:05:38.384 23:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.384 23:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.384 ************************************ 00:05:38.384 END TEST json_config_extra_key 00:05:38.384 ************************************ 00:05:38.384 23:59:52 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.384 23:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.384 23:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.384 23:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.384 ************************************ 00:05:38.384 START TEST alias_rpc 00:05:38.384 ************************************ 00:05:38.384 23:59:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.384 * Looking for test storage... 
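The json_config_extra_key pass above boils down to starting the target from a pre-built JSON config, talking to it over its own socket, and shutting it down with SIGINT. A stripped-down sketch of the same steps, using the command line this log shows for pid 68867, is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # ... exercise the target over /var/tmp/spdk_tgt.sock ...
    kill -SIGINT "$tgt_pid"    # graceful shutdown, as json_config_test_shutdown_app does
    wait "$tgt_pid"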
00:05:38.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:38.384 23:59:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.384 23:59:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.384 23:59:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.384 23:59:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.384 23:59:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.384 23:59:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.384 23:59:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.384 23:59:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.384 23:59:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.384 23:59:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.384 23:59:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.384 23:59:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.384 23:59:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.384 23:59:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.384 23:59:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.384 23:59:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.384 23:59:52 -- scripts/common.sh@344 -- # : 1 00:05:38.384 23:59:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.384 23:59:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.384 23:59:52 -- scripts/common.sh@364 -- # decimal 1 00:05:38.384 23:59:52 -- scripts/common.sh@352 -- # local d=1 00:05:38.384 23:59:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.384 23:59:52 -- scripts/common.sh@354 -- # echo 1 00:05:38.384 23:59:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.384 23:59:52 -- scripts/common.sh@365 -- # decimal 2 00:05:38.384 23:59:52 -- scripts/common.sh@352 -- # local d=2 00:05:38.384 23:59:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.384 23:59:52 -- scripts/common.sh@354 -- # echo 2 00:05:38.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.384 23:59:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.384 23:59:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.384 23:59:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.384 23:59:52 -- scripts/common.sh@367 -- # return 0 00:05:38.384 23:59:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.384 23:59:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.384 --rc genhtml_branch_coverage=1 00:05:38.384 --rc genhtml_function_coverage=1 00:05:38.384 --rc genhtml_legend=1 00:05:38.384 --rc geninfo_all_blocks=1 00:05:38.384 --rc geninfo_unexecuted_blocks=1 00:05:38.384 00:05:38.384 ' 00:05:38.384 23:59:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.384 --rc genhtml_branch_coverage=1 00:05:38.384 --rc genhtml_function_coverage=1 00:05:38.384 --rc genhtml_legend=1 00:05:38.385 --rc geninfo_all_blocks=1 00:05:38.385 --rc geninfo_unexecuted_blocks=1 00:05:38.385 00:05:38.385 ' 00:05:38.385 23:59:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.385 --rc genhtml_branch_coverage=1 00:05:38.385 --rc genhtml_function_coverage=1 00:05:38.385 --rc genhtml_legend=1 00:05:38.385 --rc geninfo_all_blocks=1 00:05:38.385 --rc geninfo_unexecuted_blocks=1 00:05:38.385 00:05:38.385 ' 00:05:38.385 23:59:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.385 --rc genhtml_branch_coverage=1 00:05:38.385 --rc genhtml_function_coverage=1 00:05:38.385 --rc genhtml_legend=1 00:05:38.385 --rc geninfo_all_blocks=1 00:05:38.385 --rc geninfo_unexecuted_blocks=1 00:05:38.385 00:05:38.385 ' 00:05:38.385 23:59:52 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.385 23:59:52 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68945 00:05:38.385 23:59:52 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68945 00:05:38.385 23:59:52 -- common/autotest_common.sh@829 -- # '[' -z 68945 ']' 00:05:38.385 23:59:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.385 23:59:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.385 23:59:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.385 23:59:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.385 23:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.385 23:59:52 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.385 [2024-11-27 23:59:52.945925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:38.385 [2024-11-27 23:59:52.946035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68945 ] 00:05:38.643 [2024-11-27 23:59:53.118137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.643 [2024-11-27 23:59:53.167010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.643 [2024-11-27 23:59:53.167466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.211 23:59:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.211 23:59:53 -- common/autotest_common.sh@862 -- # return 0 00:05:39.211 23:59:53 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.469 23:59:53 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68945 00:05:39.469 23:59:53 -- common/autotest_common.sh@936 -- # '[' -z 68945 ']' 00:05:39.469 23:59:53 -- common/autotest_common.sh@940 -- # kill -0 68945 00:05:39.469 23:59:53 -- common/autotest_common.sh@941 -- # uname 00:05:39.469 23:59:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.469 23:59:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68945 00:05:39.469 killing process with pid 68945 00:05:39.469 23:59:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.469 23:59:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.469 23:59:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68945' 00:05:39.469 23:59:53 -- common/autotest_common.sh@955 -- # kill 68945 00:05:39.469 23:59:53 -- common/autotest_common.sh@960 -- # wait 68945 00:05:39.727 ************************************ 00:05:39.727 END TEST alias_rpc 00:05:39.727 ************************************ 00:05:39.727 00:05:39.727 real 0m1.498s 00:05:39.727 user 0m1.623s 00:05:39.727 sys 0m0.338s 00:05:39.727 23:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.727 23:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.727 23:59:54 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:39.727 23:59:54 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.728 23:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.728 23:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.728 23:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.728 ************************************ 00:05:39.728 START TEST spdkcli_tcp 00:05:39.728 ************************************ 00:05:39.728 23:59:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.986 * Looking for test storage... 
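The alias_rpc pass above feeds configuration to the running target with 'rpc.py load_config -i', i.e. JSON read from stdin. A minimal sketch of that call follows; the config body here is a hypothetical one-bdev example for illustration, not the config the test actually loads:

    # hypothetical config: one malloc bdev, in the save_config/load_config subsystem layout
    example_config='{
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
          ] }
      ]
    }'
    echo "$example_config" | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i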
00:05:39.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.986 23:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.987 23:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.987 23:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.987 23:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.987 23:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.987 23:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.987 23:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.987 23:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.987 23:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.987 23:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.987 23:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.987 23:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.987 23:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.987 23:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.987 23:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.987 23:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.987 23:59:54 -- scripts/common.sh@344 -- # : 1 00:05:39.987 23:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.987 23:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.987 23:59:54 -- scripts/common.sh@364 -- # decimal 1 00:05:39.987 23:59:54 -- scripts/common.sh@352 -- # local d=1 00:05:39.987 23:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.987 23:59:54 -- scripts/common.sh@354 -- # echo 1 00:05:39.987 23:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.987 23:59:54 -- scripts/common.sh@365 -- # decimal 2 00:05:39.987 23:59:54 -- scripts/common.sh@352 -- # local d=2 00:05:39.987 23:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.987 23:59:54 -- scripts/common.sh@354 -- # echo 2 00:05:39.987 23:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.987 23:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.987 23:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.987 23:59:54 -- scripts/common.sh@367 -- # return 0 00:05:39.987 23:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.987 23:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.987 --rc genhtml_branch_coverage=1 00:05:39.987 --rc genhtml_function_coverage=1 00:05:39.987 --rc genhtml_legend=1 00:05:39.987 --rc geninfo_all_blocks=1 00:05:39.987 --rc geninfo_unexecuted_blocks=1 00:05:39.987 00:05:39.987 ' 00:05:39.987 23:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.987 --rc genhtml_branch_coverage=1 00:05:39.987 --rc genhtml_function_coverage=1 00:05:39.987 --rc genhtml_legend=1 00:05:39.987 --rc geninfo_all_blocks=1 00:05:39.987 --rc geninfo_unexecuted_blocks=1 00:05:39.987 00:05:39.987 ' 00:05:39.987 23:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.987 --rc genhtml_branch_coverage=1 00:05:39.987 --rc genhtml_function_coverage=1 00:05:39.987 --rc genhtml_legend=1 00:05:39.987 --rc geninfo_all_blocks=1 00:05:39.987 --rc geninfo_unexecuted_blocks=1 00:05:39.987 00:05:39.987 ' 00:05:39.987 23:59:54 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.987 --rc genhtml_branch_coverage=1 00:05:39.987 --rc genhtml_function_coverage=1 00:05:39.987 --rc genhtml_legend=1 00:05:39.987 --rc geninfo_all_blocks=1 00:05:39.987 --rc geninfo_unexecuted_blocks=1 00:05:39.987 00:05:39.987 ' 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.987 23:59:54 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.987 23:59:54 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.987 23:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.987 23:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69022 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@27 -- # waitforlisten 69022 00:05:39.987 23:59:54 -- common/autotest_common.sh@829 -- # '[' -z 69022 ']' 00:05:39.987 23:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.987 23:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.987 23:59:54 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.987 23:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.987 23:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.987 23:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 [2024-11-27 23:59:54.490707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:39.987 [2024-11-27 23:59:54.490830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69022 ] 00:05:40.245 [2024-11-27 23:59:54.637294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.246 [2024-11-27 23:59:54.667918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.246 [2024-11-27 23:59:54.668411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.246 [2024-11-27 23:59:54.668469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.811 23:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.811 23:59:55 -- common/autotest_common.sh@862 -- # return 0 00:05:40.811 23:59:55 -- spdkcli/tcp.sh@31 -- # socat_pid=69035 00:05:40.811 23:59:55 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.811 23:59:55 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:41.068 [ 00:05:41.068 "bdev_malloc_delete", 00:05:41.068 "bdev_malloc_create", 00:05:41.068 "bdev_null_resize", 00:05:41.068 "bdev_null_delete", 00:05:41.068 "bdev_null_create", 00:05:41.068 "bdev_nvme_cuse_unregister", 00:05:41.068 "bdev_nvme_cuse_register", 00:05:41.068 "bdev_opal_new_user", 00:05:41.068 "bdev_opal_set_lock_state", 00:05:41.068 "bdev_opal_delete", 00:05:41.068 "bdev_opal_get_info", 00:05:41.068 "bdev_opal_create", 00:05:41.069 "bdev_nvme_opal_revert", 00:05:41.069 "bdev_nvme_opal_init", 00:05:41.069 "bdev_nvme_send_cmd", 00:05:41.069 "bdev_nvme_get_path_iostat", 00:05:41.069 "bdev_nvme_get_mdns_discovery_info", 00:05:41.069 "bdev_nvme_stop_mdns_discovery", 00:05:41.069 "bdev_nvme_start_mdns_discovery", 00:05:41.069 "bdev_nvme_set_multipath_policy", 00:05:41.069 "bdev_nvme_set_preferred_path", 00:05:41.069 "bdev_nvme_get_io_paths", 00:05:41.069 "bdev_nvme_remove_error_injection", 00:05:41.069 "bdev_nvme_add_error_injection", 00:05:41.069 "bdev_nvme_get_discovery_info", 00:05:41.069 "bdev_nvme_stop_discovery", 00:05:41.069 "bdev_nvme_start_discovery", 00:05:41.069 "bdev_nvme_get_controller_health_info", 00:05:41.069 "bdev_nvme_disable_controller", 00:05:41.069 "bdev_nvme_enable_controller", 00:05:41.069 "bdev_nvme_reset_controller", 00:05:41.069 "bdev_nvme_get_transport_statistics", 00:05:41.069 "bdev_nvme_apply_firmware", 00:05:41.069 "bdev_nvme_detach_controller", 00:05:41.069 "bdev_nvme_get_controllers", 00:05:41.069 "bdev_nvme_attach_controller", 00:05:41.069 "bdev_nvme_set_hotplug", 00:05:41.069 "bdev_nvme_set_options", 00:05:41.069 "bdev_passthru_delete", 00:05:41.069 "bdev_passthru_create", 00:05:41.069 "bdev_lvol_grow_lvstore", 00:05:41.069 "bdev_lvol_get_lvols", 00:05:41.069 "bdev_lvol_get_lvstores", 00:05:41.069 "bdev_lvol_delete", 00:05:41.069 "bdev_lvol_set_read_only", 00:05:41.069 "bdev_lvol_resize", 00:05:41.069 "bdev_lvol_decouple_parent", 00:05:41.069 "bdev_lvol_inflate", 00:05:41.069 "bdev_lvol_rename", 00:05:41.069 "bdev_lvol_clone_bdev", 00:05:41.069 "bdev_lvol_clone", 00:05:41.069 "bdev_lvol_snapshot", 00:05:41.069 "bdev_lvol_create", 00:05:41.069 "bdev_lvol_delete_lvstore", 00:05:41.069 "bdev_lvol_rename_lvstore", 00:05:41.069 "bdev_lvol_create_lvstore", 00:05:41.069 "bdev_raid_set_options", 00:05:41.069 "bdev_raid_remove_base_bdev", 00:05:41.069 "bdev_raid_add_base_bdev", 
00:05:41.069 "bdev_raid_delete", 00:05:41.069 "bdev_raid_create", 00:05:41.069 "bdev_raid_get_bdevs", 00:05:41.069 "bdev_error_inject_error", 00:05:41.069 "bdev_error_delete", 00:05:41.069 "bdev_error_create", 00:05:41.069 "bdev_split_delete", 00:05:41.069 "bdev_split_create", 00:05:41.069 "bdev_delay_delete", 00:05:41.069 "bdev_delay_create", 00:05:41.069 "bdev_delay_update_latency", 00:05:41.069 "bdev_zone_block_delete", 00:05:41.069 "bdev_zone_block_create", 00:05:41.069 "blobfs_create", 00:05:41.069 "blobfs_detect", 00:05:41.069 "blobfs_set_cache_size", 00:05:41.069 "bdev_xnvme_delete", 00:05:41.069 "bdev_xnvme_create", 00:05:41.069 "bdev_aio_delete", 00:05:41.069 "bdev_aio_rescan", 00:05:41.069 "bdev_aio_create", 00:05:41.069 "bdev_ftl_set_property", 00:05:41.069 "bdev_ftl_get_properties", 00:05:41.069 "bdev_ftl_get_stats", 00:05:41.069 "bdev_ftl_unmap", 00:05:41.069 "bdev_ftl_unload", 00:05:41.069 "bdev_ftl_delete", 00:05:41.069 "bdev_ftl_load", 00:05:41.069 "bdev_ftl_create", 00:05:41.069 "bdev_virtio_attach_controller", 00:05:41.069 "bdev_virtio_scsi_get_devices", 00:05:41.069 "bdev_virtio_detach_controller", 00:05:41.069 "bdev_virtio_blk_set_hotplug", 00:05:41.069 "bdev_iscsi_delete", 00:05:41.069 "bdev_iscsi_create", 00:05:41.069 "bdev_iscsi_set_options", 00:05:41.069 "accel_error_inject_error", 00:05:41.069 "ioat_scan_accel_module", 00:05:41.069 "dsa_scan_accel_module", 00:05:41.069 "iaa_scan_accel_module", 00:05:41.069 "iscsi_set_options", 00:05:41.069 "iscsi_get_auth_groups", 00:05:41.069 "iscsi_auth_group_remove_secret", 00:05:41.069 "iscsi_auth_group_add_secret", 00:05:41.069 "iscsi_delete_auth_group", 00:05:41.069 "iscsi_create_auth_group", 00:05:41.069 "iscsi_set_discovery_auth", 00:05:41.069 "iscsi_get_options", 00:05:41.069 "iscsi_target_node_request_logout", 00:05:41.069 "iscsi_target_node_set_redirect", 00:05:41.069 "iscsi_target_node_set_auth", 00:05:41.069 "iscsi_target_node_add_lun", 00:05:41.069 "iscsi_get_connections", 00:05:41.069 "iscsi_portal_group_set_auth", 00:05:41.069 "iscsi_start_portal_group", 00:05:41.069 "iscsi_delete_portal_group", 00:05:41.069 "iscsi_create_portal_group", 00:05:41.069 "iscsi_get_portal_groups", 00:05:41.069 "iscsi_delete_target_node", 00:05:41.069 "iscsi_target_node_remove_pg_ig_maps", 00:05:41.069 "iscsi_target_node_add_pg_ig_maps", 00:05:41.069 "iscsi_create_target_node", 00:05:41.069 "iscsi_get_target_nodes", 00:05:41.069 "iscsi_delete_initiator_group", 00:05:41.069 "iscsi_initiator_group_remove_initiators", 00:05:41.069 "iscsi_initiator_group_add_initiators", 00:05:41.069 "iscsi_create_initiator_group", 00:05:41.069 "iscsi_get_initiator_groups", 00:05:41.069 "nvmf_set_crdt", 00:05:41.069 "nvmf_set_config", 00:05:41.069 "nvmf_set_max_subsystems", 00:05:41.069 "nvmf_subsystem_get_listeners", 00:05:41.069 "nvmf_subsystem_get_qpairs", 00:05:41.069 "nvmf_subsystem_get_controllers", 00:05:41.069 "nvmf_get_stats", 00:05:41.069 "nvmf_get_transports", 00:05:41.069 "nvmf_create_transport", 00:05:41.069 "nvmf_get_targets", 00:05:41.069 "nvmf_delete_target", 00:05:41.069 "nvmf_create_target", 00:05:41.069 "nvmf_subsystem_allow_any_host", 00:05:41.069 "nvmf_subsystem_remove_host", 00:05:41.069 "nvmf_subsystem_add_host", 00:05:41.069 "nvmf_subsystem_remove_ns", 00:05:41.069 "nvmf_subsystem_add_ns", 00:05:41.069 "nvmf_subsystem_listener_set_ana_state", 00:05:41.069 "nvmf_discovery_get_referrals", 00:05:41.069 "nvmf_discovery_remove_referral", 00:05:41.069 "nvmf_discovery_add_referral", 00:05:41.069 "nvmf_subsystem_remove_listener", 00:05:41.069 
"nvmf_subsystem_add_listener", 00:05:41.069 "nvmf_delete_subsystem", 00:05:41.069 "nvmf_create_subsystem", 00:05:41.069 "nvmf_get_subsystems", 00:05:41.069 "env_dpdk_get_mem_stats", 00:05:41.069 "nbd_get_disks", 00:05:41.069 "nbd_stop_disk", 00:05:41.069 "nbd_start_disk", 00:05:41.069 "ublk_recover_disk", 00:05:41.069 "ublk_get_disks", 00:05:41.069 "ublk_stop_disk", 00:05:41.069 "ublk_start_disk", 00:05:41.069 "ublk_destroy_target", 00:05:41.069 "ublk_create_target", 00:05:41.069 "virtio_blk_create_transport", 00:05:41.069 "virtio_blk_get_transports", 00:05:41.069 "vhost_controller_set_coalescing", 00:05:41.069 "vhost_get_controllers", 00:05:41.069 "vhost_delete_controller", 00:05:41.069 "vhost_create_blk_controller", 00:05:41.069 "vhost_scsi_controller_remove_target", 00:05:41.069 "vhost_scsi_controller_add_target", 00:05:41.069 "vhost_start_scsi_controller", 00:05:41.069 "vhost_create_scsi_controller", 00:05:41.069 "thread_set_cpumask", 00:05:41.069 "framework_get_scheduler", 00:05:41.069 "framework_set_scheduler", 00:05:41.069 "framework_get_reactors", 00:05:41.069 "thread_get_io_channels", 00:05:41.069 "thread_get_pollers", 00:05:41.069 "thread_get_stats", 00:05:41.069 "framework_monitor_context_switch", 00:05:41.069 "spdk_kill_instance", 00:05:41.069 "log_enable_timestamps", 00:05:41.069 "log_get_flags", 00:05:41.069 "log_clear_flag", 00:05:41.069 "log_set_flag", 00:05:41.069 "log_get_level", 00:05:41.069 "log_set_level", 00:05:41.069 "log_get_print_level", 00:05:41.069 "log_set_print_level", 00:05:41.069 "framework_enable_cpumask_locks", 00:05:41.069 "framework_disable_cpumask_locks", 00:05:41.069 "framework_wait_init", 00:05:41.069 "framework_start_init", 00:05:41.069 "scsi_get_devices", 00:05:41.069 "bdev_get_histogram", 00:05:41.069 "bdev_enable_histogram", 00:05:41.069 "bdev_set_qos_limit", 00:05:41.069 "bdev_set_qd_sampling_period", 00:05:41.069 "bdev_get_bdevs", 00:05:41.070 "bdev_reset_iostat", 00:05:41.070 "bdev_get_iostat", 00:05:41.070 "bdev_examine", 00:05:41.070 "bdev_wait_for_examine", 00:05:41.070 "bdev_set_options", 00:05:41.070 "notify_get_notifications", 00:05:41.070 "notify_get_types", 00:05:41.070 "accel_get_stats", 00:05:41.070 "accel_set_options", 00:05:41.070 "accel_set_driver", 00:05:41.070 "accel_crypto_key_destroy", 00:05:41.070 "accel_crypto_keys_get", 00:05:41.070 "accel_crypto_key_create", 00:05:41.070 "accel_assign_opc", 00:05:41.070 "accel_get_module_info", 00:05:41.070 "accel_get_opc_assignments", 00:05:41.070 "vmd_rescan", 00:05:41.070 "vmd_remove_device", 00:05:41.070 "vmd_enable", 00:05:41.070 "sock_set_default_impl", 00:05:41.070 "sock_impl_set_options", 00:05:41.070 "sock_impl_get_options", 00:05:41.070 "iobuf_get_stats", 00:05:41.070 "iobuf_set_options", 00:05:41.070 "framework_get_pci_devices", 00:05:41.070 "framework_get_config", 00:05:41.070 "framework_get_subsystems", 00:05:41.070 "trace_get_info", 00:05:41.070 "trace_get_tpoint_group_mask", 00:05:41.070 "trace_disable_tpoint_group", 00:05:41.070 "trace_enable_tpoint_group", 00:05:41.070 "trace_clear_tpoint_mask", 00:05:41.070 "trace_set_tpoint_mask", 00:05:41.070 "spdk_get_version", 00:05:41.070 "rpc_get_methods" 00:05:41.070 ] 00:05:41.070 23:59:55 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:41.070 23:59:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.070 23:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.070 23:59:55 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:41.070 23:59:55 -- spdkcli/tcp.sh@38 -- # killprocess 69022 00:05:41.070 
23:59:55 -- common/autotest_common.sh@936 -- # '[' -z 69022 ']' 00:05:41.070 23:59:55 -- common/autotest_common.sh@940 -- # kill -0 69022 00:05:41.070 23:59:55 -- common/autotest_common.sh@941 -- # uname 00:05:41.070 23:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.070 23:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69022 00:05:41.070 killing process with pid 69022 00:05:41.070 23:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.070 23:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.070 23:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69022' 00:05:41.070 23:59:55 -- common/autotest_common.sh@955 -- # kill 69022 00:05:41.070 23:59:55 -- common/autotest_common.sh@960 -- # wait 69022 00:05:41.327 00:05:41.327 real 0m1.497s 00:05:41.327 user 0m2.649s 00:05:41.327 sys 0m0.364s 00:05:41.327 23:59:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.327 ************************************ 00:05:41.327 END TEST spdkcli_tcp 00:05:41.327 ************************************ 00:05:41.327 23:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.327 23:59:55 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.327 23:59:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.327 23:59:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.327 23:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.327 ************************************ 00:05:41.327 START TEST dpdk_mem_utility 00:05:41.327 ************************************ 00:05:41.327 23:59:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.327 * Looking for test storage... 00:05:41.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:41.327 23:59:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.327 23:59:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.327 23:59:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.586 23:59:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.586 23:59:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.586 23:59:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.586 23:59:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.586 23:59:55 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.586 23:59:55 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.586 23:59:55 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.586 23:59:55 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.586 23:59:55 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.586 23:59:55 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.586 23:59:55 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.586 23:59:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.586 23:59:55 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.586 23:59:55 -- scripts/common.sh@344 -- # : 1 00:05:41.586 23:59:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.586 23:59:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.586 23:59:55 -- scripts/common.sh@364 -- # decimal 1 00:05:41.586 23:59:55 -- scripts/common.sh@352 -- # local d=1 00:05:41.586 23:59:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.586 23:59:55 -- scripts/common.sh@354 -- # echo 1 00:05:41.586 23:59:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.586 23:59:55 -- scripts/common.sh@365 -- # decimal 2 00:05:41.586 23:59:55 -- scripts/common.sh@352 -- # local d=2 00:05:41.586 23:59:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.586 23:59:55 -- scripts/common.sh@354 -- # echo 2 00:05:41.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.586 23:59:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.586 23:59:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.586 23:59:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.586 23:59:55 -- scripts/common.sh@367 -- # return 0 00:05:41.586 23:59:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.586 23:59:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.586 --rc genhtml_branch_coverage=1 00:05:41.586 --rc genhtml_function_coverage=1 00:05:41.586 --rc genhtml_legend=1 00:05:41.586 --rc geninfo_all_blocks=1 00:05:41.586 --rc geninfo_unexecuted_blocks=1 00:05:41.586 00:05:41.586 ' 00:05:41.586 23:59:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.586 --rc genhtml_branch_coverage=1 00:05:41.586 --rc genhtml_function_coverage=1 00:05:41.586 --rc genhtml_legend=1 00:05:41.586 --rc geninfo_all_blocks=1 00:05:41.586 --rc geninfo_unexecuted_blocks=1 00:05:41.586 00:05:41.586 ' 00:05:41.586 23:59:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.586 --rc genhtml_branch_coverage=1 00:05:41.586 --rc genhtml_function_coverage=1 00:05:41.586 --rc genhtml_legend=1 00:05:41.586 --rc geninfo_all_blocks=1 00:05:41.586 --rc geninfo_unexecuted_blocks=1 00:05:41.586 00:05:41.586 ' 00:05:41.586 23:59:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.586 --rc genhtml_branch_coverage=1 00:05:41.586 --rc genhtml_function_coverage=1 00:05:41.586 --rc genhtml_legend=1 00:05:41.586 --rc geninfo_all_blocks=1 00:05:41.586 --rc geninfo_unexecuted_blocks=1 00:05:41.586 00:05:41.586 ' 00:05:41.586 23:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.586 23:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69117 00:05:41.586 23:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69117 00:05:41.586 23:59:55 -- common/autotest_common.sh@829 -- # '[' -z 69117 ']' 00:05:41.586 23:59:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.586 23:59:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.586 23:59:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
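The dpdk_mem_utility run below starts spdk_tgt, asks it over RPC to dump its DPDK allocator state, and then post-processes the dump file. A condensed sketch of that sequence with the paths exactly as they appear in the trace (the test's rpc_cmd helper is shown here as a plain rpc.py call, which is an approximation):

  # start the target and wait for its RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  # have the target write its DPDK memory statistics; the reply below names the file:
  # { "filename": "/tmp/spdk_mem_dump.txt" }
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from that dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # per-element detail for heap 0 (the long free/malloc element listing that follows)
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0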
00:05:41.586 23:59:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.586 23:59:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.586 23:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.586 [2024-11-27 23:59:56.014266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.586 [2024-11-27 23:59:56.014393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69117 ] 00:05:41.586 [2024-11-27 23:59:56.163154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.844 [2024-11-27 23:59:56.193036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.844 [2024-11-27 23:59:56.193229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.412 23:59:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.412 23:59:56 -- common/autotest_common.sh@862 -- # return 0 00:05:42.412 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:42.412 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:42.412 23:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.412 23:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.412 { 00:05:42.412 "filename": "/tmp/spdk_mem_dump.txt" 00:05:42.412 } 00:05:42.412 23:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.412 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.412 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:42.412 1 heaps totaling size 814.000000 MiB 00:05:42.412 size: 814.000000 MiB heap id: 0 00:05:42.412 end heaps---------- 00:05:42.412 8 mempools totaling size 598.116089 MiB 00:05:42.412 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:42.412 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:42.412 size: 84.521057 MiB name: bdev_io_69117 00:05:42.412 size: 51.011292 MiB name: evtpool_69117 00:05:42.412 size: 50.003479 MiB name: msgpool_69117 00:05:42.412 size: 21.763794 MiB name: PDU_Pool 00:05:42.412 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:42.412 size: 0.026123 MiB name: Session_Pool 00:05:42.412 end mempools------- 00:05:42.412 6 memzones totaling size 4.142822 MiB 00:05:42.412 size: 1.000366 MiB name: RG_ring_0_69117 00:05:42.412 size: 1.000366 MiB name: RG_ring_1_69117 00:05:42.412 size: 1.000366 MiB name: RG_ring_4_69117 00:05:42.412 size: 1.000366 MiB name: RG_ring_5_69117 00:05:42.412 size: 0.125366 MiB name: RG_ring_2_69117 00:05:42.412 size: 0.015991 MiB name: RG_ring_3_69117 00:05:42.413 end memzones------- 00:05:42.413 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:42.413 heap id: 0 total size: 814.000000 MiB number of busy elements: 309 number of free elements: 15 00:05:42.413 list of free elements. 
size: 12.470276 MiB 00:05:42.413 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:42.413 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:42.413 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:42.413 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:42.413 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:42.413 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:42.413 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:42.413 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:42.413 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:42.413 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:05:42.413 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:42.413 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:42.413 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:42.413 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:42.413 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:42.413 list of standard malloc elements. size: 199.267151 MiB 00:05:42.413 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:42.413 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:42.413 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:42.413 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:42.413 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:42.413 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.413 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:42.413 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.413 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:42.413 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:05:42.413 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:42.413 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:42.413 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93280 with size: 0.000183 MiB 
00:05:42.414 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:42.414 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:42.414 element at 
address: 0x200027e6c480 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:42.414 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6e940 
with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:42.415 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:42.415 list of memzone associated elements. 
size: 602.262573 MiB 00:05:42.415 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:42.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.415 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:42.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.415 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:42.415 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_69117_0 00:05:42.415 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:42.415 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69117_0 00:05:42.415 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:42.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69117_0 00:05:42.415 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:42.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.415 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:42.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.415 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:42.415 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69117 00:05:42.415 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:42.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69117 00:05:42.415 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69117 00:05:42.415 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:42.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.415 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:42.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.415 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:42.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.415 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:42.415 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:42.415 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:42.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69117 00:05:42.415 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:42.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69117 00:05:42.415 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:42.415 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69117 00:05:42.415 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:42.415 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69117 00:05:42.415 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:42.415 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69117 00:05:42.415 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:42.415 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.415 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:42.415 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.415 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:42.415 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.415 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:42.415 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_69117 00:05:42.415 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:42.415 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.415 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:42.415 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.415 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:42.415 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69117 00:05:42.415 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:42.415 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.415 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:42.415 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69117 00:05:42.415 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:42.415 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69117 00:05:42.415 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:42.415 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.415 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.415 23:59:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69117 00:05:42.415 23:59:56 -- common/autotest_common.sh@936 -- # '[' -z 69117 ']' 00:05:42.415 23:59:56 -- common/autotest_common.sh@940 -- # kill -0 69117 00:05:42.415 23:59:56 -- common/autotest_common.sh@941 -- # uname 00:05:42.415 23:59:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.415 23:59:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69117 00:05:42.415 killing process with pid 69117 00:05:42.415 23:59:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.415 23:59:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.415 23:59:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69117' 00:05:42.415 23:59:56 -- common/autotest_common.sh@955 -- # kill 69117 00:05:42.415 23:59:56 -- common/autotest_common.sh@960 -- # wait 69117 00:05:42.674 00:05:42.674 real 0m1.382s 00:05:42.674 user 0m1.402s 00:05:42.674 sys 0m0.366s 00:05:42.674 23:59:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.674 ************************************ 00:05:42.674 END TEST dpdk_mem_utility 00:05:42.674 ************************************ 00:05:42.674 23:59:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.674 23:59:57 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.674 23:59:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.674 23:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.674 23:59:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.674 ************************************ 00:05:42.674 START TEST event 00:05:42.674 ************************************ 00:05:42.674 23:59:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.933 * Looking for test storage... 
00:05:42.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.933 23:59:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.933 23:59:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.933 23:59:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.933 23:59:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.933 23:59:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.933 23:59:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.933 23:59:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.933 23:59:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.933 23:59:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.933 23:59:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.933 23:59:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.933 23:59:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.933 23:59:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.933 23:59:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.933 23:59:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.933 23:59:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.933 23:59:57 -- scripts/common.sh@344 -- # : 1 00:05:42.933 23:59:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.933 23:59:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.933 23:59:57 -- scripts/common.sh@364 -- # decimal 1 00:05:42.934 23:59:57 -- scripts/common.sh@352 -- # local d=1 00:05:42.934 23:59:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.934 23:59:57 -- scripts/common.sh@354 -- # echo 1 00:05:42.934 23:59:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.934 23:59:57 -- scripts/common.sh@365 -- # decimal 2 00:05:42.934 23:59:57 -- scripts/common.sh@352 -- # local d=2 00:05:42.934 23:59:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.934 23:59:57 -- scripts/common.sh@354 -- # echo 2 00:05:42.934 23:59:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.934 23:59:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.934 23:59:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.934 23:59:57 -- scripts/common.sh@367 -- # return 0 00:05:42.934 23:59:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.934 23:59:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.934 --rc genhtml_branch_coverage=1 00:05:42.934 --rc genhtml_function_coverage=1 00:05:42.934 --rc genhtml_legend=1 00:05:42.934 --rc geninfo_all_blocks=1 00:05:42.934 --rc geninfo_unexecuted_blocks=1 00:05:42.934 00:05:42.934 ' 00:05:42.934 23:59:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.934 --rc genhtml_branch_coverage=1 00:05:42.934 --rc genhtml_function_coverage=1 00:05:42.934 --rc genhtml_legend=1 00:05:42.934 --rc geninfo_all_blocks=1 00:05:42.934 --rc geninfo_unexecuted_blocks=1 00:05:42.934 00:05:42.934 ' 00:05:42.934 23:59:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.934 --rc genhtml_branch_coverage=1 00:05:42.934 --rc genhtml_function_coverage=1 00:05:42.934 --rc genhtml_legend=1 00:05:42.934 --rc geninfo_all_blocks=1 00:05:42.934 --rc geninfo_unexecuted_blocks=1 00:05:42.934 00:05:42.934 ' 00:05:42.934 23:59:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.934 --rc genhtml_branch_coverage=1 00:05:42.934 --rc genhtml_function_coverage=1 00:05:42.934 --rc genhtml_legend=1 00:05:42.934 --rc geninfo_all_blocks=1 00:05:42.934 --rc geninfo_unexecuted_blocks=1 00:05:42.934 00:05:42.934 ' 00:05:42.934 23:59:57 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.934 23:59:57 -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.934 23:59:57 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.934 23:59:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:42.934 23:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.934 23:59:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.934 ************************************ 00:05:42.934 START TEST event_perf 00:05:42.934 ************************************ 00:05:42.934 23:59:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.934 Running I/O for 1 seconds...[2024-11-27 23:59:57.416450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.934 [2024-11-27 23:59:57.416645] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69191 ] 00:05:43.192 [2024-11-27 23:59:57.561514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.192 [2024-11-27 23:59:57.595771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.192 [2024-11-27 23:59:57.596068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.192 [2024-11-27 23:59:57.596201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.192 [2024-11-27 23:59:57.596281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.126 Running I/O for 1 seconds... 00:05:44.126 lcore 0: 204268 00:05:44.126 lcore 1: 204268 00:05:44.126 lcore 2: 204269 00:05:44.126 lcore 3: 204265 00:05:44.126 done. 00:05:44.126 00:05:44.126 real 0m1.264s 00:05:44.126 user 0m4.077s 00:05:44.126 sys 0m0.073s 00:05:44.126 23:59:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.126 23:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 END TEST event_perf 00:05:44.126 ************************************ 00:05:44.126 23:59:58 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.126 23:59:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:44.126 23:59:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.126 23:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 START TEST event_reactor 00:05:44.126 ************************************ 00:05:44.126 23:59:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.126 [2024-11-27 23:59:58.713491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
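The event_perf figures above come from a one-second run across four reactors: -m takes a hexadecimal core mask, so 0xF sets bits 0-3 and yields the four per-lcore counters reported. Re-running it by hand would use the same binary and flags as the trace (shown purely as an illustration):

  # 0xF = lcores 0,1,2,3; -t 1 limits the run to one second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1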
00:05:44.126 [2024-11-27 23:59:58.713610] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69235 ] 00:05:44.384 [2024-11-27 23:59:58.860627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.384 [2024-11-27 23:59:58.890441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.759 test_start 00:05:45.759 oneshot 00:05:45.759 tick 100 00:05:45.759 tick 100 00:05:45.759 tick 250 00:05:45.759 tick 100 00:05:45.759 tick 100 00:05:45.759 tick 250 00:05:45.759 tick 500 00:05:45.759 tick 100 00:05:45.759 tick 100 00:05:45.759 tick 100 00:05:45.759 tick 250 00:05:45.759 tick 100 00:05:45.759 tick 100 00:05:45.759 test_end 00:05:45.759 00:05:45.759 real 0m1.261s 00:05:45.759 user 0m1.086s 00:05:45.759 sys 0m0.066s 00:05:45.759 ************************************ 00:05:45.759 END TEST event_reactor 00:05:45.759 ************************************ 00:05:45.759 23:59:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.759 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.759 23:59:59 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.759 23:59:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:45.759 23:59:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.759 23:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.759 ************************************ 00:05:45.759 START TEST event_reactor_perf 00:05:45.759 ************************************ 00:05:45.759 23:59:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.759 [2024-11-28 00:00:00.024181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:45.759 [2024-11-28 00:00:00.024440] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69267 ] 00:05:45.759 [2024-11-28 00:00:00.169678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.759 [2024-11-28 00:00:00.199656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.795 test_start 00:05:46.795 test_end 00:05:46.795 Performance: 312265 events per second 00:05:46.795 00:05:46.795 real 0m1.262s 00:05:46.795 user 0m1.082s 00:05:46.795 sys 0m0.071s 00:05:46.795 00:00:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.795 00:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 ************************************ 00:05:46.795 END TEST event_reactor_perf 00:05:46.795 ************************************ 00:05:46.795 00:00:01 -- event/event.sh@49 -- # uname -s 00:05:46.795 00:00:01 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.795 00:00:01 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.795 00:00:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.795 00:00:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.795 00:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.795 ************************************ 00:05:46.795 START TEST event_scheduler 00:05:46.795 ************************************ 00:05:46.795 00:00:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.796 * Looking for test storage... 00:05:46.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:46.796 00:00:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.796 00:00:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.796 00:00:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.054 00:00:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.054 00:00:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.054 00:00:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.054 00:00:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.054 00:00:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.054 00:00:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.054 00:00:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.054 00:00:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.054 00:00:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.054 00:00:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.054 00:00:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.054 00:00:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.054 00:00:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.054 00:00:01 -- scripts/common.sh@344 -- # : 1 00:05:47.054 00:00:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.054 00:00:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.054 00:00:01 -- scripts/common.sh@364 -- # decimal 1 00:05:47.054 00:00:01 -- scripts/common.sh@352 -- # local d=1 00:05:47.054 00:00:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.054 00:00:01 -- scripts/common.sh@354 -- # echo 1 00:05:47.054 00:00:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.054 00:00:01 -- scripts/common.sh@365 -- # decimal 2 00:05:47.054 00:00:01 -- scripts/common.sh@352 -- # local d=2 00:05:47.054 00:00:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.054 00:00:01 -- scripts/common.sh@354 -- # echo 2 00:05:47.054 00:00:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.054 00:00:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.054 00:00:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.054 00:00:01 -- scripts/common.sh@367 -- # return 0 00:05:47.054 00:00:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.054 00:00:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.054 --rc genhtml_branch_coverage=1 00:05:47.054 --rc genhtml_function_coverage=1 00:05:47.054 --rc genhtml_legend=1 00:05:47.054 --rc geninfo_all_blocks=1 00:05:47.054 --rc geninfo_unexecuted_blocks=1 00:05:47.054 00:05:47.054 ' 00:05:47.054 00:00:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.054 --rc genhtml_branch_coverage=1 00:05:47.054 --rc genhtml_function_coverage=1 00:05:47.054 --rc genhtml_legend=1 00:05:47.054 --rc geninfo_all_blocks=1 00:05:47.054 --rc geninfo_unexecuted_blocks=1 00:05:47.054 00:05:47.054 ' 00:05:47.054 00:00:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.054 --rc genhtml_branch_coverage=1 00:05:47.054 --rc genhtml_function_coverage=1 00:05:47.054 --rc genhtml_legend=1 00:05:47.054 --rc geninfo_all_blocks=1 00:05:47.054 --rc geninfo_unexecuted_blocks=1 00:05:47.054 00:05:47.054 ' 00:05:47.054 00:00:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.054 --rc genhtml_branch_coverage=1 00:05:47.054 --rc genhtml_function_coverage=1 00:05:47.054 --rc genhtml_legend=1 00:05:47.054 --rc geninfo_all_blocks=1 00:05:47.054 --rc geninfo_unexecuted_blocks=1 00:05:47.054 00:05:47.054 ' 00:05:47.054 00:00:01 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.054 00:00:01 -- scheduler/scheduler.sh@35 -- # scheduler_pid=69331 00:05:47.054 00:00:01 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.054 00:00:01 -- scheduler/scheduler.sh@37 -- # waitforlisten 69331 00:05:47.054 00:00:01 -- common/autotest_common.sh@829 -- # '[' -z 69331 ']' 00:05:47.054 00:00:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.054 00:00:01 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.054 00:00:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.054 00:00:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
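The scheduler test app launched above runs with --wait-for-rpc, so the framework stays uninitialized until the test drives it over the socket it is now waiting on. Its RPC sequence, written out as plain rpc.py calls for illustration (the trace below issues them through the rpc_cmd helper):

  # switch to the dynamic scheduler, then let the framework finish initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # the scheduler_plugin extension then creates the test threads, e.g. an active
  # thread pinned to lcore 0:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100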
00:05:47.054 00:00:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.054 00:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:47.054 [2024-11-28 00:00:01.495255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.054 [2024-11-28 00:00:01.495534] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69331 ] 00:05:47.054 [2024-11-28 00:00:01.640675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.311 [2024-11-28 00:00:01.672975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.311 [2024-11-28 00:00:01.673422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.311 [2024-11-28 00:00:01.673483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.311 [2024-11-28 00:00:01.673448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.878 00:00:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.878 00:00:02 -- common/autotest_common.sh@862 -- # return 0 00:05:47.878 00:00:02 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.878 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.878 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.878 POWER: Env isn't set yet! 00:05:47.878 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:47.878 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.878 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.878 POWER: Attempting to initialise PSTAT power management... 00:05:47.878 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.878 POWER: Cannot set governor of lcore 0 to performance 00:05:47.878 POWER: Attempting to initialise AMD PSTATE power management... 00:05:47.878 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.878 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.878 POWER: Attempting to initialise CPPC power management... 00:05:47.878 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.878 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.878 POWER: Attempting to initialise VM power management... 
00:05:47.878 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:47.878 POWER: Unable to set Power Management Environment for lcore 0 00:05:47.878 [2024-11-28 00:00:02.326562] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:47.878 [2024-11-28 00:00:02.326579] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:47.879 [2024-11-28 00:00:02.326587] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.879 [2024-11-28 00:00:02.326615] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.879 [2024-11-28 00:00:02.326623] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.879 [2024-11-28 00:00:02.326644] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 [2024-11-28 00:00:02.378883] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.879 00:00:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.879 00:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 ************************************ 00:05:47.879 START TEST scheduler_create_thread 00:05:47.879 ************************************ 00:05:47.879 00:00:02 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 2 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 3 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 4 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 5 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 6 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 7 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 8 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 9 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 10 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.879 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.879 00:00:02 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.879 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.879 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.138 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.138 00:00:02 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.138 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.138 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.138 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.138 00:00:02 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.138 00:00:02 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.138 00:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.138 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.397 ************************************ 00:05:48.397 END TEST scheduler_create_thread 00:05:48.397 ************************************ 00:05:48.397 00:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.397 00:05:48.397 real 0m0.591s 00:05:48.397 user 0m0.015s 00:05:48.397 sys 0m0.003s 00:05:48.397 00:00:02 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.397 00:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.655 00:00:03 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.655 00:00:03 -- scheduler/scheduler.sh@46 -- # killprocess 69331 00:05:48.655 00:00:03 -- common/autotest_common.sh@936 -- # '[' -z 69331 ']' 00:05:48.655 00:00:03 -- common/autotest_common.sh@940 -- # kill -0 69331 00:05:48.655 00:00:03 -- common/autotest_common.sh@941 -- # uname 00:05:48.655 00:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.655 00:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69331 00:05:48.655 killing process with pid 69331 00:05:48.655 00:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:48.655 00:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:48.655 00:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69331' 00:05:48.655 00:00:03 -- common/autotest_common.sh@955 -- # kill 69331 00:05:48.655 00:00:03 -- common/autotest_common.sh@960 -- # wait 69331 00:05:48.915 [2024-11-28 00:00:03.460814] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:49.174 00:05:49.174 real 0m2.304s 00:05:49.174 user 0m4.468s 00:05:49.174 sys 0m0.309s 00:05:49.174 00:00:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.174 ************************************ 00:05:49.174 END TEST event_scheduler 00:05:49.174 ************************************ 00:05:49.174 00:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.174 00:00:03 -- event/event.sh@51 -- # modprobe -n nbd 00:05:49.174 00:00:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:49.174 00:00:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.174 00:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.174 00:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.174 ************************************ 00:05:49.174 START TEST app_repeat 00:05:49.174 ************************************ 00:05:49.174 00:00:03 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:49.174 00:00:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.174 00:00:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.174 00:00:03 -- event/event.sh@13 -- # local nbd_list 00:05:49.174 00:00:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.174 00:00:03 -- event/event.sh@14 -- # local bdev_list 00:05:49.174 00:00:03 -- event/event.sh@15 -- # local repeat_times=4 00:05:49.174 00:00:03 -- event/event.sh@17 -- # modprobe nbd 00:05:49.174 00:00:03 -- event/event.sh@19 -- # repeat_pid=69407 00:05:49.174 00:00:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.174 Process app_repeat pid: 69407 00:05:49.174 00:00:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69407' 00:05:49.174 00:00:03 -- event/event.sh@23 -- # for i in {0..2} 00:05:49.174 spdk_app_start Round 0 00:05:49.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
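For reference, the killprocess teardown traced above for the scheduler app (pid 69331) reduces to the following pattern; this is a minimal sketch reconstructed from the autotest_common.sh calls visible in the log, with the variable name chosen here purely for illustration:

    pid=69331                          # pid reported by the scheduler test above
    kill -0 "$pid"                     # assert the process still exists
    ps --no-headers -o comm= "$pid"    # confirm it is an SPDK reactor and not an unrelated process
    kill "$pid"                        # send SIGTERM (default signal)
    wait "$pid"                        # reap it; works because the app was launched from this same shell

The same helper is reused later in this log to stop the app_repeat and spdk_tgt processes.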
00:05:49.174 00:00:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:49.174 00:00:03 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:49.174 00:00:03 -- event/event.sh@25 -- # waitforlisten 69407 /var/tmp/spdk-nbd.sock 00:05:49.174 00:00:03 -- common/autotest_common.sh@829 -- # '[' -z 69407 ']' 00:05:49.174 00:00:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.174 00:00:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.174 00:00:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.174 00:00:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.174 00:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.174 [2024-11-28 00:00:03.688851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.174 [2024-11-28 00:00:03.688955] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69407 ] 00:05:49.432 [2024-11-28 00:00:03.836112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.432 [2024-11-28 00:00:03.866683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.432 [2024-11-28 00:00:03.866785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.999 00:00:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.999 00:00:04 -- common/autotest_common.sh@862 -- # return 0 00:05:49.999 00:00:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.258 Malloc0 00:05:50.258 00:00:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.515 Malloc1 00:05:50.515 00:00:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.515 00:00:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.515 /dev/nbd0 00:05:50.515 00:00:05 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd0 00:05:50.515 00:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.515 00:00:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:50.515 00:00:05 -- common/autotest_common.sh@867 -- # local i 00:05:50.515 00:00:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.515 00:00:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:50.774 00:00:05 -- common/autotest_common.sh@871 -- # break 00:05:50.774 00:00:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.774 1+0 records in 00:05:50.774 1+0 records out 00:05:50.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270367 s, 15.1 MB/s 00:05:50.774 00:00:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.774 00:00:05 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.774 00:00:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.774 00:00:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.774 00:00:05 -- common/autotest_common.sh@887 -- # return 0 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.774 /dev/nbd1 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.774 00:00:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.774 00:00:05 -- common/autotest_common.sh@867 -- # local i 00:05:50.774 00:00:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.774 00:00:05 -- common/autotest_common.sh@871 -- # break 00:05:50.774 00:00:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.774 00:00:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.774 1+0 records in 00:05:50.774 1+0 records out 00:05:50.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257571 s, 15.9 MB/s 00:05:50.774 00:00:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.774 00:00:05 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.774 00:00:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.774 00:00:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.774 00:00:05 -- common/autotest_common.sh@887 -- # return 0 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.774 00:00:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.774 00:00:05 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.032 { 00:05:51.032 "nbd_device": "/dev/nbd0", 00:05:51.032 "bdev_name": "Malloc0" 00:05:51.032 }, 00:05:51.032 { 00:05:51.032 "nbd_device": "/dev/nbd1", 00:05:51.032 "bdev_name": "Malloc1" 00:05:51.032 } 00:05:51.032 ]' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.032 { 00:05:51.032 "nbd_device": "/dev/nbd0", 00:05:51.032 "bdev_name": "Malloc0" 00:05:51.032 }, 00:05:51.032 { 00:05:51.032 "nbd_device": "/dev/nbd1", 00:05:51.032 "bdev_name": "Malloc1" 00:05:51.032 } 00:05:51.032 ]' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.032 /dev/nbd1' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.032 /dev/nbd1' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.032 256+0 records in 00:05:51.032 256+0 records out 00:05:51.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118638 s, 88.4 MB/s 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.032 256+0 records in 00:05:51.032 256+0 records out 00:05:51.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192778 s, 54.4 MB/s 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.032 00:00:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.032 256+0 records in 00:05:51.032 256+0 records out 00:05:51.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209192 s, 50.1 MB/s 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@41 -- # break 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.290 00:00:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@41 -- # break 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.549 00:00:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@65 -- # true 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.872 00:00:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.872 00:00:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.132 00:00:06 -- event/event.sh@35 -- # sleep 3 00:05:52.132 [2024-11-28 00:00:06.560684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.132 [2024-11-28 00:00:06.590680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.132 [2024-11-28 00:00:06.590784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.132 [2024-11-28 00:00:06.622498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.132 [2024-11-28 00:00:06.622545] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.412 spdk_app_start Round 1 00:05:55.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.412 00:00:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.412 00:00:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.412 00:00:09 -- event/event.sh@25 -- # waitforlisten 69407 /var/tmp/spdk-nbd.sock 00:05:55.412 00:00:09 -- common/autotest_common.sh@829 -- # '[' -z 69407 ']' 00:05:55.412 00:00:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.412 00:00:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.413 00:00:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.413 00:00:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.413 00:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.413 00:00:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.413 00:00:09 -- common/autotest_common.sh@862 -- # return 0 00:05:55.413 00:00:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.413 Malloc0 00:05:55.413 00:00:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.672 Malloc1 00:05:55.672 00:00:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.672 /dev/nbd0 00:05:55.672 
00:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.672 00:00:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.672 00:00:10 -- common/autotest_common.sh@867 -- # local i 00:05:55.672 00:00:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.672 00:00:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.672 00:00:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.672 00:00:10 -- common/autotest_common.sh@871 -- # break 00:05:55.672 00:00:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.672 00:00:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.672 00:00:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.672 1+0 records in 00:05:55.672 1+0 records out 00:05:55.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019464 s, 21.0 MB/s 00:05:55.672 00:00:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.672 00:00:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:55.672 00:00:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.672 00:00:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.672 00:00:10 -- common/autotest_common.sh@887 -- # return 0 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.672 00:00:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.930 /dev/nbd1 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.930 00:00:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.930 00:00:10 -- common/autotest_common.sh@867 -- # local i 00:05:55.930 00:00:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.930 00:00:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.930 00:00:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.930 00:00:10 -- common/autotest_common.sh@871 -- # break 00:05:55.930 00:00:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.930 00:00:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.930 00:00:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.930 1+0 records in 00:05:55.930 1+0 records out 00:05:55.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191763 s, 21.4 MB/s 00:05:55.930 00:00:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.930 00:00:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:55.930 00:00:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.930 00:00:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.930 00:00:10 -- common/autotest_common.sh@887 -- # return 0 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.930 00:00:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
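The waitfornbd/dd traces in this round come down to a short RPC-plus-dd sequence; the sketch below uses only commands that appear in the trace (the rpc.py path and socket are the ones from the log, while the output file is an illustrative stand-in for the test's nbdtest file):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # create two malloc bdevs (arguments as in the log: 64 MB total size, 4096-byte blocks)
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # -> Malloc1

    # export each bdev as an NBD block device
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd polls /proc/partitions (up to 20 tries) for the device,
    # then issues a single direct 4 KiB read like this to prove it is usable
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct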
00:05:55.930 00:00:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.187 { 00:05:56.187 "nbd_device": "/dev/nbd0", 00:05:56.187 "bdev_name": "Malloc0" 00:05:56.187 }, 00:05:56.187 { 00:05:56.187 "nbd_device": "/dev/nbd1", 00:05:56.187 "bdev_name": "Malloc1" 00:05:56.187 } 00:05:56.187 ]' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.187 { 00:05:56.187 "nbd_device": "/dev/nbd0", 00:05:56.187 "bdev_name": "Malloc0" 00:05:56.187 }, 00:05:56.187 { 00:05:56.187 "nbd_device": "/dev/nbd1", 00:05:56.187 "bdev_name": "Malloc1" 00:05:56.187 } 00:05:56.187 ]' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.187 /dev/nbd1' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.187 /dev/nbd1' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.187 256+0 records in 00:05:56.187 256+0 records out 00:05:56.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00895731 s, 117 MB/s 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.187 256+0 records in 00:05:56.187 256+0 records out 00:05:56.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150226 s, 69.8 MB/s 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.187 00:00:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.188 256+0 records in 00:05:56.188 256+0 records out 00:05:56.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158823 s, 66.0 MB/s 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.188 00:00:10 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.188 00:00:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@41 -- # break 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.446 00:00:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@41 -- # break 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.705 00:00:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@65 -- # true 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.963 00:00:11 -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.963 00:00:11 -- event/event.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.222 00:00:11 -- event/event.sh@35 -- # sleep 3 00:05:57.222 [2024-11-28 00:00:11.662190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.222 [2024-11-28 00:00:11.688076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.222 [2024-11-28 00:00:11.688167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.222 [2024-11-28 00:00:11.716664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.222 [2024-11-28 00:00:11.716714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.507 spdk_app_start Round 2 00:06:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.507 00:00:14 -- event/event.sh@23 -- # for i in {0..2} 00:06:00.507 00:00:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.507 00:00:14 -- event/event.sh@25 -- # waitforlisten 69407 /var/tmp/spdk-nbd.sock 00:06:00.507 00:00:14 -- common/autotest_common.sh@829 -- # '[' -z 69407 ']' 00:06:00.507 00:00:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.507 00:00:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.507 00:00:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.507 00:00:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.507 00:00:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.507 00:00:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.507 00:00:14 -- common/autotest_common.sh@862 -- # return 0 00:06:00.507 00:00:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.507 Malloc0 00:06:00.507 00:00:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.766 Malloc1 00:06:00.766 00:00:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@12 -- # local i 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.766 00:00:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc0 /dev/nbd0 00:06:00.766 /dev/nbd0 00:06:01.023 00:00:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.023 00:00:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.023 00:00:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.023 00:00:15 -- common/autotest_common.sh@867 -- # local i 00:06:01.023 00:00:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.023 00:00:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.023 00:00:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.023 00:00:15 -- common/autotest_common.sh@871 -- # break 00:06:01.023 00:00:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.023 00:00:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.023 00:00:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.023 1+0 records in 00:06:01.023 1+0 records out 00:06:01.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218399 s, 18.8 MB/s 00:06:01.023 00:00:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.023 00:00:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.024 00:00:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.024 00:00:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.024 00:00:15 -- common/autotest_common.sh@887 -- # return 0 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.024 /dev/nbd1 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.024 00:00:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.024 00:00:15 -- common/autotest_common.sh@867 -- # local i 00:06:01.024 00:00:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.024 00:00:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.024 00:00:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.024 00:00:15 -- common/autotest_common.sh@871 -- # break 00:06:01.024 00:00:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.024 00:00:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.024 00:00:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.024 1+0 records in 00:06:01.024 1+0 records out 00:06:01.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243183 s, 16.8 MB/s 00:06:01.024 00:00:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.024 00:00:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:01.024 00:00:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.024 00:00:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.024 00:00:15 -- common/autotest_common.sh@887 -- # return 0 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.024 00:00:15 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.024 00:00:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.283 { 00:06:01.283 "nbd_device": "/dev/nbd0", 00:06:01.283 "bdev_name": "Malloc0" 00:06:01.283 }, 00:06:01.283 { 00:06:01.283 "nbd_device": "/dev/nbd1", 00:06:01.283 "bdev_name": "Malloc1" 00:06:01.283 } 00:06:01.283 ]' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.283 { 00:06:01.283 "nbd_device": "/dev/nbd0", 00:06:01.283 "bdev_name": "Malloc0" 00:06:01.283 }, 00:06:01.283 { 00:06:01.283 "nbd_device": "/dev/nbd1", 00:06:01.283 "bdev_name": "Malloc1" 00:06:01.283 } 00:06:01.283 ]' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.283 /dev/nbd1' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.283 /dev/nbd1' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.283 256+0 records in 00:06:01.283 256+0 records out 00:06:01.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128389 s, 81.7 MB/s 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.283 00:00:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.283 256+0 records in 00:06:01.283 256+0 records out 00:06:01.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124071 s, 84.5 MB/s 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.543 256+0 records in 00:06:01.543 256+0 records out 00:06:01.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157306 s, 66.7 MB/s 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.543 00:00:15 -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.543 00:00:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@41 -- # break 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.543 00:00:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.801 00:00:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.801 00:00:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@41 -- # break 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.802 00:00:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@65 -- # true 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.060 00:00:16 -- bdev/nbd_common.sh@109 -- # return 0 
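The write/verify half of each round (nbd_dd_data_verify) is likewise a plain dd/cmp sequence; a sketch reconstructed from the traces above, with a shortened temp path standing in for the test's nbdrandtest file:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct  # write it through each NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"                             # read back and byte-compare
    done
    rm /tmp/nbdrandtest

    # teardown for the round, again over RPC
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1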
00:06:02.060 00:00:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.318 00:00:16 -- event/event.sh@35 -- # sleep 3 00:06:02.318 [2024-11-28 00:00:16.828694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.319 [2024-11-28 00:00:16.855200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.319 [2024-11-28 00:00:16.855201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.319 [2024-11-28 00:00:16.884031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.319 [2024-11-28 00:00:16.884075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.708 00:00:19 -- event/event.sh@38 -- # waitforlisten 69407 /var/tmp/spdk-nbd.sock 00:06:05.708 00:00:19 -- common/autotest_common.sh@829 -- # '[' -z 69407 ']' 00:06:05.708 00:00:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.708 00:00:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.708 00:00:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.708 00:00:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.708 00:00:19 -- common/autotest_common.sh@10 -- # set +x 00:06:05.708 00:00:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.708 00:00:19 -- common/autotest_common.sh@862 -- # return 0 00:06:05.708 00:00:19 -- event/event.sh@39 -- # killprocess 69407 00:06:05.708 00:00:19 -- common/autotest_common.sh@936 -- # '[' -z 69407 ']' 00:06:05.708 00:00:19 -- common/autotest_common.sh@940 -- # kill -0 69407 00:06:05.708 00:00:19 -- common/autotest_common.sh@941 -- # uname 00:06:05.708 00:00:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.708 00:00:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69407 00:06:05.708 00:00:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.708 00:00:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.708 killing process with pid 69407 00:06:05.708 00:00:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69407' 00:06:05.708 00:00:19 -- common/autotest_common.sh@955 -- # kill 69407 00:06:05.708 00:00:19 -- common/autotest_common.sh@960 -- # wait 69407 00:06:05.708 spdk_app_start is called in Round 0. 00:06:05.708 Shutdown signal received, stop current app iteration 00:06:05.708 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:05.708 spdk_app_start is called in Round 1. 00:06:05.708 Shutdown signal received, stop current app iteration 00:06:05.708 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:05.708 spdk_app_start is called in Round 2. 00:06:05.708 Shutdown signal received, stop current app iteration 00:06:05.708 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:05.708 spdk_app_start is called in Round 3. 
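Taken together, the event.sh line numbers in the trace outline the shape of the app_repeat driver; the following is a rough reconstruction from those traces, not the event.sh source, and the loop body is summarized in comments:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # wait for the app to listen on /var/tmp/spdk-nbd.sock, create the malloc bdevs,
        # then run the NBD start/verify/stop sequence sketched above
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3            # the app catches SIGTERM, stops this iteration and reinitializes for the next
    done
    killprocess "$repeat_pid"    # final teardown after Round 3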
00:06:05.708 Shutdown signal received, stop current app iteration 00:06:05.708 00:00:20 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.708 ************************************ 00:06:05.708 END TEST app_repeat 00:06:05.708 ************************************ 00:06:05.708 00:00:20 -- event/event.sh@42 -- # return 0 00:06:05.708 00:06:05.708 real 0m16.433s 00:06:05.708 user 0m36.473s 00:06:05.708 sys 0m1.974s 00:06:05.708 00:00:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.708 00:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.708 00:00:20 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.708 00:00:20 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.708 00:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.708 00:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.708 00:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.708 ************************************ 00:06:05.708 START TEST cpu_locks 00:06:05.708 ************************************ 00:06:05.708 00:00:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.708 * Looking for test storage... 00:06:05.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.708 00:00:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.708 00:00:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.708 00:00:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.708 00:00:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.708 00:00:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.708 00:00:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.708 00:00:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.708 00:00:20 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.708 00:00:20 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.708 00:00:20 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.708 00:00:20 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.708 00:00:20 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.708 00:00:20 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.708 00:00:20 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.708 00:00:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.708 00:00:20 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.708 00:00:20 -- scripts/common.sh@344 -- # : 1 00:06:05.708 00:00:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.708 00:00:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.708 00:00:20 -- scripts/common.sh@364 -- # decimal 1 00:06:05.708 00:00:20 -- scripts/common.sh@352 -- # local d=1 00:06:05.708 00:00:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.708 00:00:20 -- scripts/common.sh@354 -- # echo 1 00:06:05.709 00:00:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.709 00:00:20 -- scripts/common.sh@365 -- # decimal 2 00:06:05.709 00:00:20 -- scripts/common.sh@352 -- # local d=2 00:06:05.709 00:00:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.709 00:00:20 -- scripts/common.sh@354 -- # echo 2 00:06:05.709 00:00:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.709 00:00:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.709 00:00:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.709 00:00:20 -- scripts/common.sh@367 -- # return 0 00:06:05.709 00:00:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.709 00:00:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.709 --rc genhtml_branch_coverage=1 00:06:05.709 --rc genhtml_function_coverage=1 00:06:05.709 --rc genhtml_legend=1 00:06:05.709 --rc geninfo_all_blocks=1 00:06:05.709 --rc geninfo_unexecuted_blocks=1 00:06:05.709 00:06:05.709 ' 00:06:05.709 00:00:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.709 --rc genhtml_branch_coverage=1 00:06:05.709 --rc genhtml_function_coverage=1 00:06:05.709 --rc genhtml_legend=1 00:06:05.709 --rc geninfo_all_blocks=1 00:06:05.709 --rc geninfo_unexecuted_blocks=1 00:06:05.709 00:06:05.709 ' 00:06:05.709 00:00:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.709 --rc genhtml_branch_coverage=1 00:06:05.709 --rc genhtml_function_coverage=1 00:06:05.709 --rc genhtml_legend=1 00:06:05.709 --rc geninfo_all_blocks=1 00:06:05.709 --rc geninfo_unexecuted_blocks=1 00:06:05.709 00:06:05.709 ' 00:06:05.709 00:00:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.709 --rc genhtml_branch_coverage=1 00:06:05.709 --rc genhtml_function_coverage=1 00:06:05.709 --rc genhtml_legend=1 00:06:05.709 --rc geninfo_all_blocks=1 00:06:05.709 --rc geninfo_unexecuted_blocks=1 00:06:05.709 00:06:05.709 ' 00:06:05.709 00:00:20 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.709 00:00:20 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.709 00:00:20 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.709 00:00:20 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.709 00:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.709 00:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.709 00:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.709 ************************************ 00:06:05.709 START TEST default_locks 00:06:05.709 ************************************ 00:06:05.709 00:00:20 -- common/autotest_common.sh@1114 -- # default_locks 00:06:05.709 00:00:20 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69834 00:06:05.709 00:00:20 -- event/cpu_locks.sh@47 -- # waitforlisten 69834 00:06:05.709 00:00:20 -- common/autotest_common.sh@829 -- # '[' -z 69834 ']' 00:06:05.709 00:00:20 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.709 00:00:20 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.709 00:00:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.709 00:00:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.709 00:00:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.709 00:00:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.966 [2024-11-28 00:00:20.345550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.966 [2024-11-28 00:00:20.345787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69834 ] 00:06:05.966 [2024-11-28 00:00:20.493778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.966 [2024-11-28 00:00:20.521214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.966 [2024-11-28 00:00:20.521521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.897 00:00:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.897 00:00:21 -- common/autotest_common.sh@862 -- # return 0 00:06:06.897 00:00:21 -- event/cpu_locks.sh@49 -- # locks_exist 69834 00:06:06.897 00:00:21 -- event/cpu_locks.sh@22 -- # lslocks -p 69834 00:06:06.897 00:00:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.897 00:00:21 -- event/cpu_locks.sh@50 -- # killprocess 69834 00:06:06.897 00:00:21 -- common/autotest_common.sh@936 -- # '[' -z 69834 ']' 00:06:06.897 00:00:21 -- common/autotest_common.sh@940 -- # kill -0 69834 00:06:06.897 00:00:21 -- common/autotest_common.sh@941 -- # uname 00:06:06.897 00:00:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.897 00:00:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69834 00:06:06.897 00:00:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.897 00:00:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.897 00:00:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69834' 00:06:06.897 killing process with pid 69834 00:06:06.897 00:00:21 -- common/autotest_common.sh@955 -- # kill 69834 00:06:06.897 00:00:21 -- common/autotest_common.sh@960 -- # wait 69834 00:06:07.155 00:00:21 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69834 00:06:07.155 00:00:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.155 00:00:21 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69834 00:06:07.155 00:00:21 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.155 00:00:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.155 00:00:21 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.155 00:00:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.155 00:00:21 -- common/autotest_common.sh@653 -- # waitforlisten 69834 00:06:07.155 00:00:21 -- common/autotest_common.sh@829 -- # '[' -z 69834 ']' 00:06:07.155 00:00:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.155 00:00:21 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.155 00:00:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.155 00:00:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.155 ERROR: process (pid: 69834) is no longer running 00:06:07.155 00:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69834) - No such process 00:06:07.155 00:00:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.155 00:00:21 -- common/autotest_common.sh@862 -- # return 1 00:06:07.155 00:00:21 -- common/autotest_common.sh@653 -- # es=1 00:06:07.155 00:00:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.155 00:00:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.155 00:00:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.155 00:00:21 -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.155 00:00:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.155 00:00:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.155 00:00:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.155 00:06:07.155 real 0m1.309s 00:06:07.155 user 0m1.341s 00:06:07.155 sys 0m0.370s 00:06:07.155 00:00:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.155 00:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 ************************************ 00:06:07.155 END TEST default_locks 00:06:07.155 ************************************ 00:06:07.155 00:00:21 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.155 00:00:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.155 00:00:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.155 00:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 ************************************ 00:06:07.155 START TEST default_locks_via_rpc 00:06:07.155 ************************************ 00:06:07.155 00:00:21 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:07.155 00:00:21 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69876 00:06:07.155 00:00:21 -- event/cpu_locks.sh@63 -- # waitforlisten 69876 00:06:07.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.155 00:00:21 -- common/autotest_common.sh@829 -- # '[' -z 69876 ']' 00:06:07.155 00:00:21 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.155 00:00:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.155 00:00:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.155 00:00:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.155 00:00:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.155 00:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.155 [2024-11-28 00:00:21.689047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:07.155 [2024-11-28 00:00:21.689276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69876 ] 00:06:07.412 [2024-11-28 00:00:21.834063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.412 [2024-11-28 00:00:21.861641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.412 [2024-11-28 00:00:21.861801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.978 00:00:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.978 00:00:22 -- common/autotest_common.sh@862 -- # return 0 00:06:07.978 00:00:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.978 00:00:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.978 00:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:07.978 00:00:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.978 00:00:22 -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.978 00:00:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.978 00:00:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.978 00:00:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.978 00:00:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.978 00:00:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.978 00:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:07.978 00:00:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.978 00:00:22 -- event/cpu_locks.sh@71 -- # locks_exist 69876 00:06:07.978 00:00:22 -- event/cpu_locks.sh@22 -- # lslocks -p 69876 00:06:07.978 00:00:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.237 00:00:22 -- event/cpu_locks.sh@73 -- # killprocess 69876 00:06:08.237 00:00:22 -- common/autotest_common.sh@936 -- # '[' -z 69876 ']' 00:06:08.237 00:00:22 -- common/autotest_common.sh@940 -- # kill -0 69876 00:06:08.237 00:00:22 -- common/autotest_common.sh@941 -- # uname 00:06:08.237 00:00:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.237 00:00:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69876 00:06:08.237 killing process with pid 69876 00:06:08.237 00:00:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.237 00:00:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.237 00:00:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69876' 00:06:08.237 00:00:22 -- common/autotest_common.sh@955 -- # kill 69876 00:06:08.237 00:00:22 -- common/autotest_common.sh@960 -- # wait 69876 00:06:08.495 ************************************ 00:06:08.495 END TEST default_locks_via_rpc 00:06:08.495 ************************************ 00:06:08.495 00:06:08.495 real 0m1.309s 00:06:08.495 user 0m1.325s 00:06:08.495 sys 0m0.384s 00:06:08.495 00:00:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.495 00:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.495 00:00:22 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.495 00:00:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.495 00:00:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.495 00:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.495 
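The default_locks and default_locks_via_rpc passes above both confirm a held core lock by inspecting the target process with lslocks; a minimal sketch of that check, reconstructed from the trace (the canonical helper lives in test/event/cpu_locks.sh, so the names here are a reading of the log rather than the source):

    # Returns success if the given pid holds an SPDK CPU core lock.
    # Lock files are named like /var/tmp/spdk_cpu_lock_000, one per claimed core.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 69876 && echo "core lock held by spdk_tgt"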
************************************ 00:06:08.495 START TEST non_locking_app_on_locked_coremask 00:06:08.495 ************************************ 00:06:08.495 00:00:22 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:08.495 00:00:22 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69917 00:06:08.495 00:00:22 -- event/cpu_locks.sh@81 -- # waitforlisten 69917 /var/tmp/spdk.sock 00:06:08.495 00:00:22 -- common/autotest_common.sh@829 -- # '[' -z 69917 ']' 00:06:08.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.495 00:00:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.495 00:00:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.495 00:00:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.495 00:00:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.495 00:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.495 00:00:22 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.495 [2024-11-28 00:00:23.035320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.495 [2024-11-28 00:00:23.035916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69917 ] 00:06:08.754 [2024-11-28 00:00:23.181737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.754 [2024-11-28 00:00:23.208806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.754 [2024-11-28 00:00:23.209133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.319 00:00:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.319 00:00:23 -- common/autotest_common.sh@862 -- # return 0 00:06:09.319 00:00:23 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.319 00:00:23 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69933 00:06:09.319 00:00:23 -- event/cpu_locks.sh@85 -- # waitforlisten 69933 /var/tmp/spdk2.sock 00:06:09.319 00:00:23 -- common/autotest_common.sh@829 -- # '[' -z 69933 ']' 00:06:09.319 00:00:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.319 00:00:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.319 00:00:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.319 00:00:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.319 00:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.319 [2024-11-28 00:00:23.907956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.319 [2024-11-28 00:00:23.908226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69933 ] 00:06:09.576 [2024-11-28 00:00:24.054120] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.576 [2024-11-28 00:00:24.054161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.577 [2024-11-28 00:00:24.109447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.577 [2024-11-28 00:00:24.109609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.143 00:00:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.143 00:00:24 -- common/autotest_common.sh@862 -- # return 0 00:06:10.143 00:00:24 -- event/cpu_locks.sh@87 -- # locks_exist 69917 00:06:10.143 00:00:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.143 00:00:24 -- event/cpu_locks.sh@22 -- # lslocks -p 69917 00:06:10.716 00:00:25 -- event/cpu_locks.sh@89 -- # killprocess 69917 00:06:10.716 00:00:25 -- common/autotest_common.sh@936 -- # '[' -z 69917 ']' 00:06:10.716 00:00:25 -- common/autotest_common.sh@940 -- # kill -0 69917 00:06:10.716 00:00:25 -- common/autotest_common.sh@941 -- # uname 00:06:10.716 00:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.716 00:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69917 00:06:10.716 00:00:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.716 00:00:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.716 killing process with pid 69917 00:06:10.716 00:00:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69917' 00:06:10.716 00:00:25 -- common/autotest_common.sh@955 -- # kill 69917 00:06:10.716 00:00:25 -- common/autotest_common.sh@960 -- # wait 69917 00:06:10.983 00:00:25 -- event/cpu_locks.sh@90 -- # killprocess 69933 00:06:10.983 00:00:25 -- common/autotest_common.sh@936 -- # '[' -z 69933 ']' 00:06:10.983 00:00:25 -- common/autotest_common.sh@940 -- # kill -0 69933 00:06:10.983 00:00:25 -- common/autotest_common.sh@941 -- # uname 00:06:10.983 00:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.983 00:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69933 00:06:10.983 killing process with pid 69933 00:06:10.983 00:00:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.983 00:00:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.983 00:00:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69933' 00:06:10.983 00:00:25 -- common/autotest_common.sh@955 -- # kill 69933 00:06:10.983 00:00:25 -- common/autotest_common.sh@960 -- # wait 69933 00:06:11.291 00:06:11.291 real 0m2.739s 00:06:11.291 user 0m3.024s 00:06:11.291 sys 0m0.717s 00:06:11.291 00:00:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.291 00:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 ************************************ 00:06:11.291 END TEST non_locking_app_on_locked_coremask 00:06:11.291 ************************************ 00:06:11.291 00:00:25 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.291 00:00:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.291 00:00:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.291 00:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 ************************************ 00:06:11.291 START TEST locking_app_on_unlocked_coremask 00:06:11.291 ************************************ 00:06:11.291 00:00:25 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:11.291 00:00:25 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69991 00:06:11.291 00:00:25 -- event/cpu_locks.sh@99 -- # waitforlisten 69991 /var/tmp/spdk.sock 00:06:11.291 00:00:25 -- common/autotest_common.sh@829 -- # '[' -z 69991 ']' 00:06:11.291 00:00:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.291 00:00:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.291 00:00:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.291 00:00:25 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.291 00:00:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.291 00:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 [2024-11-28 00:00:25.813692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.291 [2024-11-28 00:00:25.813806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69991 ] 00:06:11.549 [2024-11-28 00:00:25.958205] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.549 [2024-11-28 00:00:25.958247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.549 [2024-11-28 00:00:25.985399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.549 [2024-11-28 00:00:25.985712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.116 00:00:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.116 00:00:26 -- common/autotest_common.sh@862 -- # return 0 00:06:12.116 00:00:26 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70007 00:06:12.116 00:00:26 -- event/cpu_locks.sh@103 -- # waitforlisten 70007 /var/tmp/spdk2.sock 00:06:12.116 00:00:26 -- common/autotest_common.sh@829 -- # '[' -z 70007 ']' 00:06:12.116 00:00:26 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.116 00:00:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.116 00:00:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.116 00:00:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.116 00:00:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.116 00:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.116 [2024-11-28 00:00:26.697494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:12.116 [2024-11-28 00:00:26.697801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70007 ] 00:06:12.375 [2024-11-28 00:00:26.843273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.375 [2024-11-28 00:00:26.898887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.375 [2024-11-28 00:00:26.899058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.942 00:00:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.943 00:00:27 -- common/autotest_common.sh@862 -- # return 0 00:06:12.943 00:00:27 -- event/cpu_locks.sh@105 -- # locks_exist 70007 00:06:12.943 00:00:27 -- event/cpu_locks.sh@22 -- # lslocks -p 70007 00:06:12.943 00:00:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.200 00:00:27 -- event/cpu_locks.sh@107 -- # killprocess 69991 00:06:13.200 00:00:27 -- common/autotest_common.sh@936 -- # '[' -z 69991 ']' 00:06:13.200 00:00:27 -- common/autotest_common.sh@940 -- # kill -0 69991 00:06:13.200 00:00:27 -- common/autotest_common.sh@941 -- # uname 00:06:13.200 00:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.200 00:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69991 00:06:13.459 killing process with pid 69991 00:06:13.459 00:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.459 00:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.459 00:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69991' 00:06:13.459 00:00:27 -- common/autotest_common.sh@955 -- # kill 69991 00:06:13.459 00:00:27 -- common/autotest_common.sh@960 -- # wait 69991 00:06:13.717 00:00:28 -- event/cpu_locks.sh@108 -- # killprocess 70007 00:06:13.717 00:00:28 -- common/autotest_common.sh@936 -- # '[' -z 70007 ']' 00:06:13.717 00:00:28 -- common/autotest_common.sh@940 -- # kill -0 70007 00:06:13.717 00:00:28 -- common/autotest_common.sh@941 -- # uname 00:06:13.717 00:00:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.717 00:00:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70007 00:06:13.717 killing process with pid 70007 00:06:13.717 00:00:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.717 00:00:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.717 00:00:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70007' 00:06:13.717 00:00:28 -- common/autotest_common.sh@955 -- # kill 70007 00:06:13.717 00:00:28 -- common/autotest_common.sh@960 -- # wait 70007 00:06:13.976 ************************************ 00:06:13.976 END TEST locking_app_on_unlocked_coremask 00:06:13.976 ************************************ 00:06:13.976 00:06:13.976 real 0m2.731s 00:06:13.976 user 0m3.042s 00:06:13.976 sys 0m0.699s 00:06:13.976 00:00:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.976 00:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.976 00:00:28 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.976 00:00:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.976 00:00:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.976 00:00:28 -- common/autotest_common.sh@10 -- # set +x 
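The locking_app_on_unlocked_coremask pass that just finished pairs a target started with --disable-cpumask-locks against a second plain target on the same mask; a hedged sketch of that arrangement, with the binary path and flags taken from the trace above:

    # The first target opts out of core locking, so no /var/tmp/spdk_cpu_lock_000 is created...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # ...which lets a second target claim core 0 on its own RPC socket without a conflict.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &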
00:06:13.976 ************************************ 00:06:13.976 START TEST locking_app_on_locked_coremask 00:06:13.976 ************************************ 00:06:13.976 00:00:28 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:13.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.976 00:00:28 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70064 00:06:13.976 00:00:28 -- event/cpu_locks.sh@116 -- # waitforlisten 70064 /var/tmp/spdk.sock 00:06:13.976 00:00:28 -- common/autotest_common.sh@829 -- # '[' -z 70064 ']' 00:06:13.976 00:00:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.976 00:00:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.976 00:00:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.976 00:00:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.976 00:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.976 00:00:28 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.235 [2024-11-28 00:00:28.587348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.235 [2024-11-28 00:00:28.587665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70064 ] 00:06:14.235 [2024-11-28 00:00:28.733220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.235 [2024-11-28 00:00:28.760473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.235 [2024-11-28 00:00:28.760802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.802 00:00:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.802 00:00:29 -- common/autotest_common.sh@862 -- # return 0 00:06:14.802 00:00:29 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70076 00:06:14.802 00:00:29 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.802 00:00:29 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70076 /var/tmp/spdk2.sock 00:06:14.802 00:00:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.802 00:00:29 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70076 /var/tmp/spdk2.sock 00:06:14.802 00:00:29 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.802 00:00:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.802 00:00:29 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.060 00:00:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.060 00:00:29 -- common/autotest_common.sh@653 -- # waitforlisten 70076 /var/tmp/spdk2.sock 00:06:15.060 00:00:29 -- common/autotest_common.sh@829 -- # '[' -z 70076 ']' 00:06:15.060 00:00:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.060 00:00:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.060 00:00:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:15.060 00:00:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.060 00:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.060 [2024-11-28 00:00:29.461845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.060 [2024-11-28 00:00:29.462432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70076 ] 00:06:15.060 [2024-11-28 00:00:29.608248] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70064 has claimed it. 00:06:15.060 [2024-11-28 00:00:29.608303] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.626 ERROR: process (pid: 70076) is no longer running 00:06:15.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70076) - No such process 00:06:15.626 00:00:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.626 00:00:30 -- common/autotest_common.sh@862 -- # return 1 00:06:15.626 00:00:30 -- common/autotest_common.sh@653 -- # es=1 00:06:15.626 00:00:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.626 00:00:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.626 00:00:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.626 00:00:30 -- event/cpu_locks.sh@122 -- # locks_exist 70064 00:06:15.626 00:00:30 -- event/cpu_locks.sh@22 -- # lslocks -p 70064 00:06:15.626 00:00:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.883 00:00:30 -- event/cpu_locks.sh@124 -- # killprocess 70064 00:06:15.883 00:00:30 -- common/autotest_common.sh@936 -- # '[' -z 70064 ']' 00:06:15.883 00:00:30 -- common/autotest_common.sh@940 -- # kill -0 70064 00:06:15.883 00:00:30 -- common/autotest_common.sh@941 -- # uname 00:06:15.883 00:00:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.883 00:00:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70064 00:06:15.883 killing process with pid 70064 00:06:15.883 00:00:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.883 00:00:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.883 00:00:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70064' 00:06:15.883 00:00:30 -- common/autotest_common.sh@955 -- # kill 70064 00:06:15.883 00:00:30 -- common/autotest_common.sh@960 -- # wait 70064 00:06:16.143 ************************************ 00:06:16.143 END TEST locking_app_on_locked_coremask 00:06:16.143 ************************************ 00:06:16.143 00:06:16.143 real 0m1.993s 00:06:16.143 user 0m2.233s 00:06:16.143 sys 0m0.454s 00:06:16.143 00:00:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.143 00:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 00:00:30 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.143 00:00:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.143 00:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.143 00:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 ************************************ 00:06:16.143 START TEST locking_overlapped_coremask 00:06:16.143 ************************************ 00:06:16.143 00:00:30 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:16.143 00:00:30 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70118 00:06:16.143 00:00:30 -- event/cpu_locks.sh@133 -- # waitforlisten 70118 /var/tmp/spdk.sock 00:06:16.143 00:00:30 -- common/autotest_common.sh@829 -- # '[' -z 70118 ']' 00:06:16.143 00:00:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.143 00:00:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.143 00:00:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.143 00:00:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.143 00:00:30 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.143 00:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 [2024-11-28 00:00:30.627398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.143 [2024-11-28 00:00:30.627508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70118 ] 00:06:16.401 [2024-11-28 00:00:30.772446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.402 [2024-11-28 00:00:30.801462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.402 [2024-11-28 00:00:30.801859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.402 [2024-11-28 00:00:30.802083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.402 [2024-11-28 00:00:30.802117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.970 00:00:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.970 00:00:31 -- common/autotest_common.sh@862 -- # return 0 00:06:16.970 00:00:31 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70136 00:06:16.970 00:00:31 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70136 /var/tmp/spdk2.sock 00:06:16.970 00:00:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.970 00:00:31 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70136 /var/tmp/spdk2.sock 00:06:16.970 00:00:31 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.970 00:00:31 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.970 00:00:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.970 00:00:31 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.970 00:00:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.970 00:00:31 -- common/autotest_common.sh@653 -- # waitforlisten 70136 /var/tmp/spdk2.sock 00:06:16.970 00:00:31 -- common/autotest_common.sh@829 -- # '[' -z 70136 ']' 00:06:16.970 00:00:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.970 00:00:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.970 00:00:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:16.970 00:00:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.970 00:00:31 -- common/autotest_common.sh@10 -- # set +x 00:06:16.970 [2024-11-28 00:00:31.505559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.970 [2024-11-28 00:00:31.505870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70136 ] 00:06:17.228 [2024-11-28 00:00:31.657804] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70118 has claimed it. 00:06:17.228 [2024-11-28 00:00:31.657868] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.832 ERROR: process (pid: 70136) is no longer running 00:06:17.832 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70136) - No such process 00:06:17.832 00:00:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.832 00:00:32 -- common/autotest_common.sh@862 -- # return 1 00:06:17.832 00:00:32 -- common/autotest_common.sh@653 -- # es=1 00:06:17.832 00:00:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.832 00:00:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.832 00:00:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.832 00:00:32 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.832 00:00:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.832 00:00:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.832 00:00:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.832 00:00:32 -- event/cpu_locks.sh@141 -- # killprocess 70118 00:06:17.832 00:00:32 -- common/autotest_common.sh@936 -- # '[' -z 70118 ']' 00:06:17.832 00:00:32 -- common/autotest_common.sh@940 -- # kill -0 70118 00:06:17.832 00:00:32 -- common/autotest_common.sh@941 -- # uname 00:06:17.832 00:00:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.832 00:00:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70118 00:06:17.832 00:00:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.832 00:00:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.832 00:00:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70118' 00:06:17.832 killing process with pid 70118 00:06:17.832 00:00:32 -- common/autotest_common.sh@955 -- # kill 70118 00:06:17.832 00:00:32 -- common/autotest_common.sh@960 -- # wait 70118 00:06:17.832 00:06:17.832 real 0m1.826s 00:06:17.832 user 0m5.028s 00:06:17.832 sys 0m0.378s 00:06:17.832 00:00:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.833 00:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.833 ************************************ 00:06:17.833 END TEST locking_overlapped_coremask 00:06:17.833 ************************************ 00:06:17.833 00:00:32 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.833 00:00:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.833 00:00:32 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.833 00:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.833 ************************************ 00:06:17.833 START TEST locking_overlapped_coremask_via_rpc 00:06:17.833 ************************************ 00:06:17.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.833 00:00:32 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:17.833 00:00:32 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70178 00:06:17.833 00:00:32 -- event/cpu_locks.sh@149 -- # waitforlisten 70178 /var/tmp/spdk.sock 00:06:17.833 00:00:32 -- common/autotest_common.sh@829 -- # '[' -z 70178 ']' 00:06:17.833 00:00:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.833 00:00:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.833 00:00:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.833 00:00:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.833 00:00:32 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.833 00:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.091 [2024-11-28 00:00:32.494323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.091 [2024-11-28 00:00:32.494610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70178 ] 00:06:18.091 [2024-11-28 00:00:32.638475] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:18.091 [2024-11-28 00:00:32.638522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.091 [2024-11-28 00:00:32.670574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.091 [2024-11-28 00:00:32.670901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.091 [2024-11-28 00:00:32.671248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.091 [2024-11-28 00:00:32.671283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.025 00:00:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.025 00:00:33 -- common/autotest_common.sh@862 -- # return 0 00:06:19.025 00:00:33 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70190 00:06:19.025 00:00:33 -- event/cpu_locks.sh@153 -- # waitforlisten 70190 /var/tmp/spdk2.sock 00:06:19.025 00:00:33 -- common/autotest_common.sh@829 -- # '[' -z 70190 ']' 00:06:19.025 00:00:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.025 00:00:33 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.025 00:00:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.025 00:00:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:19.025 00:00:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.025 00:00:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.025 [2024-11-28 00:00:33.328633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.025 [2024-11-28 00:00:33.328873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70190 ] 00:06:19.025 [2024-11-28 00:00:33.481207] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.025 [2024-11-28 00:00:33.481259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.025 [2024-11-28 00:00:33.539973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.025 [2024-11-28 00:00:33.540276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.025 [2024-11-28 00:00:33.543431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.025 [2024-11-28 00:00:33.543466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.592 00:00:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.592 00:00:34 -- common/autotest_common.sh@862 -- # return 0 00:06:19.592 00:00:34 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.592 00:00:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.592 00:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.592 00:00:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.592 00:00:34 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.592 00:00:34 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.592 00:00:34 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.592 00:00:34 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:19.592 00:00:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.592 00:00:34 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:19.592 00:00:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.592 00:00:34 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.592 00:00:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.592 00:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.593 [2024-11-28 00:00:34.160483] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70178 has claimed it. 00:06:19.593 request: 00:06:19.593 { 00:06:19.593 "method": "framework_enable_cpumask_locks", 00:06:19.593 "req_id": 1 00:06:19.593 } 00:06:19.593 Got JSON-RPC error response 00:06:19.593 response: 00:06:19.593 { 00:06:19.593 "code": -32603, 00:06:19.593 "message": "Failed to claim CPU core: 2" 00:06:19.593 } 00:06:19.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
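The framework_enable_cpumask_locks request and the -32603 response just logged are a plain JSON-RPC exchange over the second target's socket; a hedged way to reproduce it by hand (scripts/rpc.py as the client is an assumption here, since the trace only shows the rpc_cmd wrapper):

    # Ask the spdk2.sock target to start claiming core locks; while pid 70178 still
    # holds the lock for core 2, the call fails with the -32603 error shown above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks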
00:06:19.593 00:00:34 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:19.593 00:00:34 -- common/autotest_common.sh@653 -- # es=1 00:06:19.593 00:00:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.593 00:00:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.593 00:00:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.593 00:00:34 -- event/cpu_locks.sh@158 -- # waitforlisten 70178 /var/tmp/spdk.sock 00:06:19.593 00:00:34 -- common/autotest_common.sh@829 -- # '[' -z 70178 ']' 00:06:19.593 00:00:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.593 00:00:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.593 00:00:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.593 00:00:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.593 00:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.851 00:00:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.851 00:00:34 -- common/autotest_common.sh@862 -- # return 0 00:06:19.851 00:00:34 -- event/cpu_locks.sh@159 -- # waitforlisten 70190 /var/tmp/spdk2.sock 00:06:19.851 00:00:34 -- common/autotest_common.sh@829 -- # '[' -z 70190 ']' 00:06:19.851 00:00:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.851 00:00:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.851 00:00:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.851 00:00:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.851 00:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.109 ************************************ 00:06:20.109 END TEST locking_overlapped_coremask_via_rpc 00:06:20.109 ************************************ 00:06:20.109 00:00:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.109 00:00:34 -- common/autotest_common.sh@862 -- # return 0 00:06:20.109 00:00:34 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.109 00:00:34 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.109 00:00:34 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.110 00:00:34 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.110 00:06:20.110 real 0m2.132s 00:06:20.110 user 0m0.935s 00:06:20.110 sys 0m0.135s 00:06:20.110 00:00:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.110 00:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.110 00:00:34 -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.110 00:00:34 -- event/cpu_locks.sh@15 -- # [[ -z 70178 ]] 00:06:20.110 00:00:34 -- event/cpu_locks.sh@15 -- # killprocess 70178 00:06:20.110 00:00:34 -- common/autotest_common.sh@936 -- # '[' -z 70178 ']' 00:06:20.110 00:00:34 -- common/autotest_common.sh@940 -- # kill -0 70178 00:06:20.110 00:00:34 -- common/autotest_common.sh@941 -- # uname 00:06:20.110 00:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.110 00:00:34 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70178 00:06:20.110 killing process with pid 70178 00:06:20.110 00:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.110 00:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.110 00:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70178' 00:06:20.110 00:00:34 -- common/autotest_common.sh@955 -- # kill 70178 00:06:20.110 00:00:34 -- common/autotest_common.sh@960 -- # wait 70178 00:06:20.368 00:00:34 -- event/cpu_locks.sh@16 -- # [[ -z 70190 ]] 00:06:20.368 00:00:34 -- event/cpu_locks.sh@16 -- # killprocess 70190 00:06:20.368 00:00:34 -- common/autotest_common.sh@936 -- # '[' -z 70190 ']' 00:06:20.368 00:00:34 -- common/autotest_common.sh@940 -- # kill -0 70190 00:06:20.368 00:00:34 -- common/autotest_common.sh@941 -- # uname 00:06:20.368 00:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.368 00:00:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70190 00:06:20.369 killing process with pid 70190 00:06:20.369 00:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:20.369 00:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:20.369 00:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70190' 00:06:20.369 00:00:34 -- common/autotest_common.sh@955 -- # kill 70190 00:06:20.369 00:00:34 -- common/autotest_common.sh@960 -- # wait 70190 00:06:20.628 00:00:35 -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.628 00:00:35 -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.628 00:00:35 -- event/cpu_locks.sh@15 -- # [[ -z 70178 ]] 00:06:20.628 00:00:35 -- event/cpu_locks.sh@15 -- # killprocess 70178 00:06:20.628 00:00:35 -- common/autotest_common.sh@936 -- # '[' -z 70178 ']' 00:06:20.628 00:00:35 -- common/autotest_common.sh@940 -- # kill -0 70178 00:06:20.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70178) - No such process 00:06:20.628 Process with pid 70178 is not found 00:06:20.628 00:00:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70178 is not found' 00:06:20.628 Process with pid 70190 is not found 00:06:20.628 00:00:35 -- event/cpu_locks.sh@16 -- # [[ -z 70190 ]] 00:06:20.628 00:00:35 -- event/cpu_locks.sh@16 -- # killprocess 70190 00:06:20.628 00:00:35 -- common/autotest_common.sh@936 -- # '[' -z 70190 ']' 00:06:20.628 00:00:35 -- common/autotest_common.sh@940 -- # kill -0 70190 00:06:20.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70190) - No such process 00:06:20.628 00:00:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70190 is not found' 00:06:20.628 00:00:35 -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.628 ************************************ 00:06:20.628 END TEST cpu_locks 00:06:20.628 ************************************ 00:06:20.628 00:06:20.628 real 0m14.981s 00:06:20.628 user 0m26.520s 00:06:20.628 sys 0m3.830s 00:06:20.628 00:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.628 00:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:20.628 ************************************ 00:06:20.628 END TEST event 00:06:20.628 ************************************ 00:06:20.628 00:06:20.628 real 0m37.898s 00:06:20.628 user 1m13.864s 00:06:20.628 sys 0m6.540s 00:06:20.628 00:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.628 00:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:20.628 00:00:35 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.628 00:00:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.628 00:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.628 00:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:20.628 ************************************ 00:06:20.628 START TEST thread 00:06:20.628 ************************************ 00:06:20.628 00:00:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:20.885 * Looking for test storage... 00:06:20.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:20.885 00:00:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:20.885 00:00:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:20.885 00:00:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:20.885 00:00:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:20.885 00:00:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:20.886 00:00:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:20.886 00:00:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:20.886 00:00:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:20.886 00:00:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:20.886 00:00:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.886 00:00:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:20.886 00:00:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:20.886 00:00:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:20.886 00:00:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:20.886 00:00:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:20.886 00:00:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:20.886 00:00:35 -- scripts/common.sh@344 -- # : 1 00:06:20.886 00:00:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:20.886 00:00:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.886 00:00:35 -- scripts/common.sh@364 -- # decimal 1 00:06:20.886 00:00:35 -- scripts/common.sh@352 -- # local d=1 00:06:20.886 00:00:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.886 00:00:35 -- scripts/common.sh@354 -- # echo 1 00:06:20.886 00:00:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:20.886 00:00:35 -- scripts/common.sh@365 -- # decimal 2 00:06:20.886 00:00:35 -- scripts/common.sh@352 -- # local d=2 00:06:20.886 00:00:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.886 00:00:35 -- scripts/common.sh@354 -- # echo 2 00:06:20.886 00:00:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:20.886 00:00:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:20.886 00:00:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:20.886 00:00:35 -- scripts/common.sh@367 -- # return 0 00:06:20.886 00:00:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.886 00:00:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 00:00:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 00:00:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 00:00:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:20.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.886 --rc genhtml_branch_coverage=1 00:06:20.886 --rc genhtml_function_coverage=1 00:06:20.886 --rc genhtml_legend=1 00:06:20.886 --rc geninfo_all_blocks=1 00:06:20.886 --rc geninfo_unexecuted_blocks=1 00:06:20.886 00:06:20.886 ' 00:06:20.886 00:00:35 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.886 00:00:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:20.886 00:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.886 00:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:20.886 ************************************ 00:06:20.886 START TEST thread_poller_perf 00:06:20.886 ************************************ 00:06:20.886 00:00:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.886 [2024-11-28 00:00:35.335714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.886 [2024-11-28 00:00:35.336161] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:06:20.886 [2024-11-28 00:00:35.477624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.145 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:21.145 [2024-11-28 00:00:35.504796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.079 [2024-11-28T00:00:36.681Z] ====================================== 00:06:22.079 [2024-11-28T00:00:36.681Z] busy:2612683018 (cyc) 00:06:22.079 [2024-11-28T00:00:36.681Z] total_run_count: 383000 00:06:22.079 [2024-11-28T00:00:36.681Z] tsc_hz: 2600000000 (cyc) 00:06:22.079 [2024-11-28T00:00:36.681Z] ====================================== 00:06:22.079 [2024-11-28T00:00:36.681Z] poller_cost: 6821 (cyc), 2623 (nsec) 00:06:22.079 ************************************ 00:06:22.079 00:06:22.079 real 0m1.255s 00:06:22.079 user 0m1.097s 00:06:22.079 sys 0m0.051s 00:06:22.079 00:00:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.079 00:00:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.079 END TEST thread_poller_perf 00:06:22.079 ************************************ 00:06:22.079 00:00:36 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.079 00:00:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:22.079 00:00:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.079 00:00:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.079 ************************************ 00:06:22.079 START TEST thread_poller_perf 00:06:22.079 ************************************ 00:06:22.079 00:00:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.079 [2024-11-28 00:00:36.636899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.079 [2024-11-28 00:00:36.637005] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70353 ] 00:06:22.337 [2024-11-28 00:00:36.784116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.337 [2024-11-28 00:00:36.813484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.337 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:23.302 [2024-11-28T00:00:37.904Z] ====================================== 00:06:23.302 [2024-11-28T00:00:37.904Z] busy:2603984962 (cyc) 00:06:23.302 [2024-11-28T00:00:37.904Z] total_run_count: 3807000 00:06:23.302 [2024-11-28T00:00:37.904Z] tsc_hz: 2600000000 (cyc) 00:06:23.302 [2024-11-28T00:00:37.904Z] ====================================== 00:06:23.302 [2024-11-28T00:00:37.904Z] poller_cost: 683 (cyc), 262 (nsec) 00:06:23.302 00:06:23.302 real 0m1.260s 00:06:23.302 user 0m1.090s 00:06:23.302 sys 0m0.064s 00:06:23.302 ************************************ 00:06:23.302 END TEST thread_poller_perf 00:06:23.302 ************************************ 00:06:23.302 00:00:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.302 00:00:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.302 00:00:37 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:23.302 00:06:23.302 real 0m2.719s 00:06:23.302 user 0m2.278s 00:06:23.302 sys 0m0.227s 00:06:23.302 ************************************ 00:06:23.302 END TEST thread 00:06:23.302 ************************************ 00:06:23.302 00:00:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.302 00:00:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.561 00:00:37 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:23.561 00:00:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.561 00:00:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.561 00:00:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.561 ************************************ 00:06:23.561 START TEST accel 00:06:23.561 ************************************ 00:06:23.561 00:00:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:23.561 * Looking for test storage... 00:06:23.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:23.561 00:00:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:23.561 00:00:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:23.561 00:00:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:23.561 00:00:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:23.561 00:00:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:23.561 00:00:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:23.561 00:00:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:23.561 00:00:38 -- scripts/common.sh@335 -- # IFS=.-: 00:06:23.561 00:00:38 -- scripts/common.sh@335 -- # read -ra ver1 00:06:23.561 00:00:38 -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.561 00:00:38 -- scripts/common.sh@336 -- # read -ra ver2 00:06:23.561 00:00:38 -- scripts/common.sh@337 -- # local 'op=<' 00:06:23.561 00:00:38 -- scripts/common.sh@339 -- # ver1_l=2 00:06:23.561 00:00:38 -- scripts/common.sh@340 -- # ver2_l=1 00:06:23.561 00:00:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:23.561 00:00:38 -- scripts/common.sh@343 -- # case "$op" in 00:06:23.561 00:00:38 -- scripts/common.sh@344 -- # : 1 00:06:23.561 00:00:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:23.561 00:00:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.561 00:00:38 -- scripts/common.sh@364 -- # decimal 1 00:06:23.561 00:00:38 -- scripts/common.sh@352 -- # local d=1 00:06:23.561 00:00:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.561 00:00:38 -- scripts/common.sh@354 -- # echo 1 00:06:23.561 00:00:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:23.561 00:00:38 -- scripts/common.sh@365 -- # decimal 2 00:06:23.561 00:00:38 -- scripts/common.sh@352 -- # local d=2 00:06:23.561 00:00:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.561 00:00:38 -- scripts/common.sh@354 -- # echo 2 00:06:23.561 00:00:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:23.561 00:00:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:23.561 00:00:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:23.561 00:00:38 -- scripts/common.sh@367 -- # return 0 00:06:23.561 00:00:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.561 00:00:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:23.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.561 --rc genhtml_branch_coverage=1 00:06:23.561 --rc genhtml_function_coverage=1 00:06:23.561 --rc genhtml_legend=1 00:06:23.561 --rc geninfo_all_blocks=1 00:06:23.561 --rc geninfo_unexecuted_blocks=1 00:06:23.561 00:06:23.561 ' 00:06:23.561 00:00:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:23.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.561 --rc genhtml_branch_coverage=1 00:06:23.561 --rc genhtml_function_coverage=1 00:06:23.561 --rc genhtml_legend=1 00:06:23.561 --rc geninfo_all_blocks=1 00:06:23.561 --rc geninfo_unexecuted_blocks=1 00:06:23.561 00:06:23.561 ' 00:06:23.561 00:00:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:23.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.561 --rc genhtml_branch_coverage=1 00:06:23.561 --rc genhtml_function_coverage=1 00:06:23.561 --rc genhtml_legend=1 00:06:23.561 --rc geninfo_all_blocks=1 00:06:23.561 --rc geninfo_unexecuted_blocks=1 00:06:23.561 00:06:23.562 ' 00:06:23.562 00:00:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.562 --rc genhtml_branch_coverage=1 00:06:23.562 --rc genhtml_function_coverage=1 00:06:23.562 --rc genhtml_legend=1 00:06:23.562 --rc geninfo_all_blocks=1 00:06:23.562 --rc geninfo_unexecuted_blocks=1 00:06:23.562 00:06:23.562 ' 00:06:23.562 00:00:38 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:23.562 00:00:38 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:23.562 00:00:38 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.562 00:00:38 -- accel/accel.sh@59 -- # spdk_tgt_pid=70435 00:06:23.562 00:00:38 -- accel/accel.sh@60 -- # waitforlisten 70435 00:06:23.562 00:00:38 -- common/autotest_common.sh@829 -- # '[' -z 70435 ']' 00:06:23.562 00:00:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.562 00:00:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.562 00:00:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.562 00:00:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.562 00:00:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.562 00:00:38 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:23.562 00:00:38 -- accel/accel.sh@58 -- # build_accel_config 00:06:23.562 00:00:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.562 00:00:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.562 00:00:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.562 00:00:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.562 00:00:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.562 00:00:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.562 00:00:38 -- accel/accel.sh@42 -- # jq -r . 00:06:23.562 [2024-11-28 00:00:38.135159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.562 [2024-11-28 00:00:38.135455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70435 ] 00:06:23.834 [2024-11-28 00:00:38.284304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.834 [2024-11-28 00:00:38.314252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.834 [2024-11-28 00:00:38.314599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.437 00:00:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.437 00:00:38 -- common/autotest_common.sh@862 -- # return 0 00:06:24.437 00:00:38 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:24.437 00:00:38 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:24.437 00:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.437 00:00:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.437 00:00:38 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:24.437 00:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # IFS== 00:06:24.437 00:00:38 -- accel/accel.sh@64 -- # read -r opc module 00:06:24.437 00:00:38 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:24.437 00:00:38 -- accel/accel.sh@67 -- # killprocess 70435 00:06:24.437 00:00:38 -- common/autotest_common.sh@936 -- # '[' -z 70435 ']' 00:06:24.437 00:00:38 -- common/autotest_common.sh@940 -- # kill -0 70435 00:06:24.437 00:00:38 -- common/autotest_common.sh@941 -- # uname 00:06:24.437 00:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.437 00:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70435 00:06:24.437 00:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.437 00:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.437 00:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70435' 00:06:24.437 killing process with pid 70435 00:06:24.437 00:00:38 -- common/autotest_common.sh@955 -- # kill 70435 00:06:24.437 00:00:38 -- common/autotest_common.sh@960 -- # wait 70435 00:06:24.696 00:00:39 -- accel/accel.sh@68 -- # trap - ERR 00:06:24.696 00:00:39 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:24.696 00:00:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:24.696 00:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.696 00:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.696 00:00:39 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:24.696 00:00:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:24.696 00:00:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.696 00:00:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.696 00:00:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.696 00:00:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.696 00:00:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.696 00:00:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.696 00:00:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.696 00:00:39 -- accel/accel.sh@42 -- # jq -r . 
00:06:24.696 00:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.696 00:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.954 00:00:39 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:24.954 00:00:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:24.954 00:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.954 00:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.954 ************************************ 00:06:24.954 START TEST accel_missing_filename 00:06:24.954 ************************************ 00:06:24.954 00:00:39 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:24.954 00:00:39 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.954 00:00:39 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:24.954 00:00:39 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:24.954 00:00:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.954 00:00:39 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:24.954 00:00:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.954 00:00:39 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:24.954 00:00:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:24.954 00:00:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.954 00:00:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.954 00:00:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.954 00:00:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.954 00:00:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.954 00:00:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.954 00:00:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.954 00:00:39 -- accel/accel.sh@42 -- # jq -r . 00:06:24.954 [2024-11-28 00:00:39.335564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.954 [2024-11-28 00:00:39.335669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70483 ] 00:06:24.954 [2024-11-28 00:00:39.483736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.954 [2024-11-28 00:00:39.513316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.954 [2024-11-28 00:00:39.545777] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.213 [2024-11-28 00:00:39.588603] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:25.213 A filename is required. 
00:06:25.213 00:00:39 -- common/autotest_common.sh@653 -- # es=234 00:06:25.213 00:00:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.213 00:00:39 -- common/autotest_common.sh@662 -- # es=106 00:06:25.213 00:00:39 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:25.213 00:00:39 -- common/autotest_common.sh@670 -- # es=1 00:06:25.213 ************************************ 00:06:25.213 END TEST accel_missing_filename 00:06:25.213 ************************************ 00:06:25.213 00:00:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.213 00:06:25.213 real 0m0.351s 00:06:25.213 user 0m0.176s 00:06:25.213 sys 0m0.100s 00:06:25.213 00:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.213 00:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:25.213 00:00:39 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.213 00:00:39 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:25.213 00:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.213 00:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:25.213 ************************************ 00:06:25.213 START TEST accel_compress_verify 00:06:25.213 ************************************ 00:06:25.213 00:00:39 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.213 00:00:39 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.213 00:00:39 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.213 00:00:39 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:25.213 00:00:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.213 00:00:39 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:25.213 00:00:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.213 00:00:39 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.213 00:00:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.213 00:00:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.213 00:00:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.213 00:00:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.213 00:00:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.213 00:00:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.213 00:00:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.213 00:00:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.213 00:00:39 -- accel/accel.sh@42 -- # jq -r . 00:06:25.213 [2024-11-28 00:00:39.726314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.213 [2024-11-28 00:00:39.726433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:06:25.473 [2024-11-28 00:00:39.864332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.473 [2024-11-28 00:00:39.896395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.473 [2024-11-28 00:00:39.928580] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.473 [2024-11-28 00:00:39.970937] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:25.473 00:06:25.473 Compression does not support the verify option, aborting. 00:06:25.473 00:00:40 -- common/autotest_common.sh@653 -- # es=161 00:06:25.473 00:00:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.473 00:00:40 -- common/autotest_common.sh@662 -- # es=33 00:06:25.473 00:00:40 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:25.473 00:00:40 -- common/autotest_common.sh@670 -- # es=1 00:06:25.473 00:00:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.473 00:06:25.473 real 0m0.335s 00:06:25.473 user 0m0.168s 00:06:25.473 sys 0m0.093s 00:06:25.473 00:00:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.473 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.473 ************************************ 00:06:25.473 END TEST accel_compress_verify 00:06:25.473 ************************************ 00:06:25.473 00:00:40 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:25.473 00:00:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.473 00:00:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.473 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.473 ************************************ 00:06:25.473 START TEST accel_wrong_workload 00:06:25.473 ************************************ 00:06:25.473 00:00:40 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:25.473 00:00:40 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.473 00:00:40 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:25.473 00:00:40 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:25.473 00:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.473 00:00:40 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:25.473 00:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.473 00:00:40 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:25.473 00:00:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:25.473 00:00:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.473 00:00:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.473 00:00:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.731 00:00:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.731 00:00:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.731 00:00:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.731 00:00:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.731 00:00:40 -- accel/accel.sh@42 -- # jq -r . 
00:06:25.731 Unsupported workload type: foobar 00:06:25.731 [2024-11-28 00:00:40.097918] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:25.731 accel_perf options: 00:06:25.731 [-h help message] 00:06:25.731 [-q queue depth per core] 00:06:25.731 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.731 [-T number of threads per core 00:06:25.731 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.731 [-t time in seconds] 00:06:25.731 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.731 [ dif_verify, , dif_generate, dif_generate_copy 00:06:25.731 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.731 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.731 [-S for crc32c workload, use this seed value (default 0) 00:06:25.731 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.731 [-f for fill workload, use this BYTE value (default 255) 00:06:25.731 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.731 [-y verify result if this switch is on] 00:06:25.731 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.731 Can be used to spread operations across a wider range of memory. 00:06:25.731 ************************************ 00:06:25.731 END TEST accel_wrong_workload 00:06:25.731 ************************************ 00:06:25.731 00:00:40 -- common/autotest_common.sh@653 -- # es=1 00:06:25.731 00:00:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.731 00:00:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.731 00:00:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.731 00:06:25.731 real 0m0.049s 00:06:25.731 user 0m0.056s 00:06:25.731 sys 0m0.022s 00:06:25.731 00:00:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.731 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 00:00:40 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.731 00:00:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:25.731 00:00:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.731 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 ************************************ 00:06:25.731 START TEST accel_negative_buffers 00:06:25.731 ************************************ 00:06:25.731 00:00:40 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.731 00:00:40 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.731 00:00:40 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:25.731 00:00:40 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:25.731 00:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.731 00:00:40 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:25.731 00:00:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.731 00:00:40 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:25.731 00:00:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:25.731 00:00:40 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:25.731 00:00:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.732 00:00:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.732 00:00:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.732 00:00:40 -- accel/accel.sh@42 -- # jq -r . 00:06:25.732 -x option must be non-negative. 00:06:25.732 [2024-11-28 00:00:40.182718] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:25.732 accel_perf options: 00:06:25.732 [-h help message] 00:06:25.732 [-q queue depth per core] 00:06:25.732 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.732 [-T number of threads per core 00:06:25.732 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.732 [-t time in seconds] 00:06:25.732 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.732 [ dif_verify, , dif_generate, dif_generate_copy 00:06:25.732 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.732 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.732 [-S for crc32c workload, use this seed value (default 0) 00:06:25.732 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.732 [-f for fill workload, use this BYTE value (default 255) 00:06:25.732 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.732 [-y verify result if this switch is on] 00:06:25.732 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.732 Can be used to spread operations across a wider range of memory. 
00:06:25.732 00:00:40 -- common/autotest_common.sh@653 -- # es=1 00:06:25.732 00:00:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.732 00:00:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.732 00:00:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.732 00:06:25.732 real 0m0.045s 00:06:25.732 user 0m0.052s 00:06:25.732 sys 0m0.022s 00:06:25.732 00:00:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.732 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.732 ************************************ 00:06:25.732 END TEST accel_negative_buffers 00:06:25.732 ************************************ 00:06:25.732 00:00:40 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:25.732 00:00:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:25.732 00:00:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.732 00:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.732 ************************************ 00:06:25.732 START TEST accel_crc32c 00:06:25.732 ************************************ 00:06:25.732 00:00:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:25.732 00:00:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.732 00:00:40 -- accel/accel.sh@17 -- # local accel_module 00:06:25.732 00:00:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:25.732 00:00:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:25.732 00:00:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.732 00:00:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.732 00:00:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.732 00:00:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.732 00:00:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.732 00:00:40 -- accel/accel.sh@42 -- # jq -r . 00:06:25.732 [2024-11-28 00:00:40.274954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.732 [2024-11-28 00:00:40.275057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70570 ] 00:06:25.991 [2024-11-28 00:00:40.423683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.991 [2024-11-28 00:00:40.454754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.366 00:00:41 -- accel/accel.sh@18 -- # out=' 00:06:27.366 SPDK Configuration: 00:06:27.366 Core mask: 0x1 00:06:27.366 00:06:27.366 Accel Perf Configuration: 00:06:27.366 Workload Type: crc32c 00:06:27.366 CRC-32C seed: 32 00:06:27.366 Transfer size: 4096 bytes 00:06:27.366 Vector count 1 00:06:27.366 Module: software 00:06:27.366 Queue depth: 32 00:06:27.366 Allocate depth: 32 00:06:27.366 # threads/core: 1 00:06:27.366 Run time: 1 seconds 00:06:27.366 Verify: Yes 00:06:27.366 00:06:27.366 Running for 1 seconds... 
00:06:27.366 00:06:27.366 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.366 ------------------------------------------------------------------------------------ 00:06:27.366 0,0 459200/s 1793 MiB/s 0 0 00:06:27.366 ==================================================================================== 00:06:27.366 Total 459200/s 1793 MiB/s 0 0' 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:27.366 00:00:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:27.366 00:00:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.366 00:00:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.366 00:00:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.366 00:00:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.366 00:00:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.366 00:00:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.366 00:00:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.366 00:00:41 -- accel/accel.sh@42 -- # jq -r . 00:06:27.366 [2024-11-28 00:00:41.625840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.366 [2024-11-28 00:00:41.625935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70591 ] 00:06:27.366 [2024-11-28 00:00:41.768158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.366 [2024-11-28 00:00:41.797841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=0x1 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=crc32c 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=32 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=software 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=32 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=32 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=1 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val=Yes 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.366 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.366 00:00:41 -- accel/accel.sh@21 -- # val= 00:06:27.366 00:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.367 00:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.367 00:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.741 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.741 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.741 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.741 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.741 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.741 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.741 00:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.741 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.741 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.742 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.742 00:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.742 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.742 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.742 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.742 00:00:42 -- 
accel/accel.sh@20 -- # read -r var val 00:06:28.742 00:00:42 -- accel/accel.sh@21 -- # val= 00:06:28.742 00:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.742 00:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:28.742 00:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:28.742 00:00:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.742 00:00:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:28.742 00:00:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.742 00:06:28.742 real 0m2.680s 00:06:28.742 user 0m1.146s 00:06:28.742 sys 0m0.109s 00:06:28.742 ************************************ 00:06:28.742 END TEST accel_crc32c 00:06:28.742 ************************************ 00:06:28.742 00:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.742 00:00:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.742 00:00:42 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:28.742 00:00:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.742 00:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.742 00:00:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.742 ************************************ 00:06:28.742 START TEST accel_crc32c_C2 00:06:28.742 ************************************ 00:06:28.742 00:00:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:28.742 00:00:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.742 00:00:42 -- accel/accel.sh@17 -- # local accel_module 00:06:28.742 00:00:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:28.742 00:00:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:28.742 00:00:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.742 00:00:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.742 00:00:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.742 00:00:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.742 00:00:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.742 00:00:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.742 00:00:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.742 00:00:42 -- accel/accel.sh@42 -- # jq -r . 00:06:28.742 [2024-11-28 00:00:43.001570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.742 [2024-11-28 00:00:43.001661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:06:28.742 [2024-11-28 00:00:43.141134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.742 [2024-11-28 00:00:43.168410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.113 00:00:44 -- accel/accel.sh@18 -- # out=' 00:06:30.113 SPDK Configuration: 00:06:30.113 Core mask: 0x1 00:06:30.113 00:06:30.113 Accel Perf Configuration: 00:06:30.113 Workload Type: crc32c 00:06:30.113 CRC-32C seed: 0 00:06:30.113 Transfer size: 4096 bytes 00:06:30.113 Vector count 2 00:06:30.113 Module: software 00:06:30.113 Queue depth: 32 00:06:30.113 Allocate depth: 32 00:06:30.113 # threads/core: 1 00:06:30.113 Run time: 1 seconds 00:06:30.113 Verify: Yes 00:06:30.113 00:06:30.113 Running for 1 seconds... 
00:06:30.113 00:06:30.113 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.113 ------------------------------------------------------------------------------------ 00:06:30.113 0,0 508992/s 3976 MiB/s 0 0 00:06:30.113 ==================================================================================== 00:06:30.113 Total 508992/s 1988 MiB/s 0 0' 00:06:30.113 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.113 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.113 00:00:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.113 00:00:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.113 00:00:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.113 00:00:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.113 00:00:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.113 00:00:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.113 00:00:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.113 00:00:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.113 00:00:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.113 00:00:44 -- accel/accel.sh@42 -- # jq -r . 00:06:30.113 [2024-11-28 00:00:44.326707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.113 [2024-11-28 00:00:44.326814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70641 ] 00:06:30.113 [2024-11-28 00:00:44.474251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.114 [2024-11-28 00:00:44.501101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=0x1 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=crc32c 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=0 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=software 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=32 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=32 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=1 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val=Yes 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.114 00:00:44 -- accel/accel.sh@21 -- # val= 00:06:30.114 00:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.114 00:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.058 00:00:45 -- 
accel/accel.sh@20 -- # read -r var val 00:06:31.058 00:00:45 -- accel/accel.sh@21 -- # val= 00:06:31.058 00:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.058 00:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.059 00:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.059 00:00:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.059 00:00:45 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:31.059 00:00:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.059 00:06:31.059 real 0m2.661s 00:06:31.059 user 0m2.264s 00:06:31.059 sys 0m0.199s 00:06:31.059 00:00:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.059 00:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.059 ************************************ 00:06:31.059 END TEST accel_crc32c_C2 00:06:31.059 ************************************ 00:06:31.317 00:00:45 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:31.317 00:00:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:31.317 00:00:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.317 00:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.317 ************************************ 00:06:31.317 START TEST accel_copy 00:06:31.317 ************************************ 00:06:31.317 00:00:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:31.317 00:00:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.317 00:00:45 -- accel/accel.sh@17 -- # local accel_module 00:06:31.317 00:00:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:31.317 00:00:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:31.317 00:00:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.317 00:00:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.317 00:00:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.317 00:00:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.317 00:00:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.317 00:00:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.317 00:00:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.317 00:00:45 -- accel/accel.sh@42 -- # jq -r . 00:06:31.317 [2024-11-28 00:00:45.702344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.317 [2024-11-28 00:00:45.702452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70677 ] 00:06:31.317 [2024-11-28 00:00:45.844500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.317 [2024-11-28 00:00:45.872460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.695 00:00:47 -- accel/accel.sh@18 -- # out=' 00:06:32.695 SPDK Configuration: 00:06:32.695 Core mask: 0x1 00:06:32.695 00:06:32.695 Accel Perf Configuration: 00:06:32.695 Workload Type: copy 00:06:32.695 Transfer size: 4096 bytes 00:06:32.695 Vector count 1 00:06:32.695 Module: software 00:06:32.695 Queue depth: 32 00:06:32.695 Allocate depth: 32 00:06:32.695 # threads/core: 1 00:06:32.695 Run time: 1 seconds 00:06:32.695 Verify: Yes 00:06:32.695 00:06:32.695 Running for 1 seconds... 
00:06:32.695 00:06:32.695 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.695 ------------------------------------------------------------------------------------ 00:06:32.695 0,0 385088/s 1504 MiB/s 0 0 00:06:32.695 ==================================================================================== 00:06:32.695 Total 385088/s 1504 MiB/s 0 0' 00:06:32.695 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.695 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.695 00:00:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:32.695 00:00:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:32.695 00:00:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.695 00:00:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.695 00:00:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.696 00:00:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.696 00:00:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.696 00:00:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.696 00:00:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.696 00:00:47 -- accel/accel.sh@42 -- # jq -r . 00:06:32.696 [2024-11-28 00:00:47.037404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.696 [2024-11-28 00:00:47.037513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70692 ] 00:06:32.696 [2024-11-28 00:00:47.182232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.696 [2024-11-28 00:00:47.209602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=0x1 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=copy 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- 
accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=software 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=32 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=32 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=1 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val=Yes 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:32.696 00:00:47 -- accel/accel.sh@21 -- # val= 00:06:32.696 00:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:32.696 00:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@21 -- # val= 00:06:34.075 00:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.075 00:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:34.075 00:00:48 -- 
accel/accel.sh@20 -- # read -r var val 00:06:34.075 00:00:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.075 00:00:48 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:34.075 00:00:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.075 00:06:34.075 real 0m2.661s 00:06:34.075 user 0m2.260s 00:06:34.075 sys 0m0.203s 00:06:34.075 00:00:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.075 00:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.075 ************************************ 00:06:34.075 END TEST accel_copy 00:06:34.075 ************************************ 00:06:34.075 00:00:48 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.075 00:00:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:34.075 00:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.075 00:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.075 ************************************ 00:06:34.075 START TEST accel_fill 00:06:34.075 ************************************ 00:06:34.075 00:00:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.075 00:00:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.075 00:00:48 -- accel/accel.sh@17 -- # local accel_module 00:06:34.075 00:00:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.075 00:00:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.075 00:00:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.075 00:00:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.075 00:00:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.075 00:00:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.075 00:00:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.075 00:00:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.075 00:00:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.075 00:00:48 -- accel/accel.sh@42 -- # jq -r . 00:06:34.075 [2024-11-28 00:00:48.398597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.075 [2024-11-28 00:00:48.398899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:06:34.075 [2024-11-28 00:00:48.542831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.075 [2024-11-28 00:00:48.570527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.451 00:00:49 -- accel/accel.sh@18 -- # out=' 00:06:35.451 SPDK Configuration: 00:06:35.451 Core mask: 0x1 00:06:35.451 00:06:35.451 Accel Perf Configuration: 00:06:35.451 Workload Type: fill 00:06:35.451 Fill pattern: 0x80 00:06:35.451 Transfer size: 4096 bytes 00:06:35.451 Vector count 1 00:06:35.451 Module: software 00:06:35.451 Queue depth: 64 00:06:35.451 Allocate depth: 64 00:06:35.451 # threads/core: 1 00:06:35.451 Run time: 1 seconds 00:06:35.451 Verify: Yes 00:06:35.451 00:06:35.451 Running for 1 seconds... 
00:06:35.451 00:06:35.451 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.451 ------------------------------------------------------------------------------------ 00:06:35.451 0,0 611648/s 2389 MiB/s 0 0 00:06:35.451 ==================================================================================== 00:06:35.451 Total 611648/s 2389 MiB/s 0 0' 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.451 00:00:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.451 00:00:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.451 00:00:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.451 00:00:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.451 00:00:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.451 00:00:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.451 00:00:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.451 00:00:49 -- accel/accel.sh@42 -- # jq -r . 00:06:35.451 [2024-11-28 00:00:49.731166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.451 [2024-11-28 00:00:49.731274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:06:35.451 [2024-11-28 00:00:49.877784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.451 [2024-11-28 00:00:49.904994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val=0x1 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val=fill 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val=0x80 00:06:35.451 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.451 00:00:49 -- accel/accel.sh@20 -- # read -r var val 
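The Bandwidth column in the fill table above is just the transfer rate multiplied by the 4096-byte transfer size from the configuration block. A minimal check of that arithmetic, not part of the captured output (it only assumes a POSIX awk on the PATH):

    # 611648 transfers/s x 4096 bytes/transfer, expressed in MiB/s
    awk 'BEGIN { printf "%d MiB/s\n", int(611648 * 4096 / (1024 * 1024)) }'
    # prints: 2389 MiB/s, matching the per-core and Total figures above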
00:06:35.451 00:00:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val=software 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val=64 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val=64 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val=1 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val=Yes 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.452 00:00:49 -- accel/accel.sh@21 -- # val= 00:06:35.452 00:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.452 00:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 
00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@21 -- # val= 00:06:36.833 00:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.833 00:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.833 00:00:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.833 00:00:51 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:36.833 00:00:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.833 00:06:36.833 real 0m2.676s 00:06:36.833 user 0m2.271s 00:06:36.833 sys 0m0.201s 00:06:36.833 00:00:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.833 ************************************ 00:06:36.833 END TEST accel_fill 00:06:36.833 ************************************ 00:06:36.833 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.833 00:00:51 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:36.833 00:00:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.833 00:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.833 00:00:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.833 ************************************ 00:06:36.833 START TEST accel_copy_crc32c 00:06:36.833 ************************************ 00:06:36.833 00:00:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:36.833 00:00:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.833 00:00:51 -- accel/accel.sh@17 -- # local accel_module 00:06:36.833 00:00:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.833 00:00:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.833 00:00:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.833 00:00:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.833 00:00:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.833 00:00:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.833 00:00:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.833 00:00:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.833 00:00:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.833 00:00:51 -- accel/accel.sh@42 -- # jq -r . 00:06:36.833 [2024-11-28 00:00:51.136033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.833 [2024-11-28 00:00:51.136166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:06:36.833 [2024-11-28 00:00:51.280135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.833 [2024-11-28 00:00:51.339592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.217 00:00:52 -- accel/accel.sh@18 -- # out=' 00:06:38.217 SPDK Configuration: 00:06:38.217 Core mask: 0x1 00:06:38.217 00:06:38.217 Accel Perf Configuration: 00:06:38.217 Workload Type: copy_crc32c 00:06:38.217 CRC-32C seed: 0 00:06:38.217 Vector size: 4096 bytes 00:06:38.217 Transfer size: 4096 bytes 00:06:38.217 Vector count 1 00:06:38.217 Module: software 00:06:38.217 Queue depth: 32 00:06:38.217 Allocate depth: 32 00:06:38.217 # threads/core: 1 00:06:38.217 Run time: 1 seconds 00:06:38.217 Verify: Yes 00:06:38.217 00:06:38.217 Running for 1 seconds... 
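The copy_crc32c test that starts above drives the same accel_perf example binary shown in the trace. As a hedged sketch, the workload can be reproduced by hand with the flags visible in the log, assuming the binary has been built at the path the trace shows; omitting the harness's -c /dev/fd/62 argument (which hands accel_perf a generated accel module JSON config, empty in this run) is an assumption, not something the log itself demonstrates:

    # one second of software copy_crc32c with data verification (-y), mirroring the traced flags
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
    # the harness additionally passes: -c /dev/fd/62   (generated accel module JSON config)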
00:06:38.217 00:06:38.217 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.217 ------------------------------------------------------------------------------------ 00:06:38.217 0,0 254400/s 993 MiB/s 0 0 00:06:38.217 ==================================================================================== 00:06:38.217 Total 254400/s 993 MiB/s 0 0' 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:38.217 00:00:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.217 00:00:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.217 00:00:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.217 00:00:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.217 00:00:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.217 00:00:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.217 00:00:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.217 00:00:52 -- accel/accel.sh@42 -- # jq -r . 00:06:38.217 [2024-11-28 00:00:52.516547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.217 [2024-11-28 00:00:52.516662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70798 ] 00:06:38.217 [2024-11-28 00:00:52.661678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.217 [2024-11-28 00:00:52.689921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=0x1 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=0 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 
00:00:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=software 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=32 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=32 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=1 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val=Yes 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:38.217 00:00:52 -- accel/accel.sh@21 -- # val= 00:06:38.217 00:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:38.217 00:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 
00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@21 -- # val= 00:06:39.612 00:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:39.612 00:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:39.612 00:00:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.612 00:00:53 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:39.612 ************************************ 00:06:39.612 END TEST accel_copy_crc32c 00:06:39.612 ************************************ 00:06:39.612 00:00:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.612 00:06:39.612 real 0m2.714s 00:06:39.612 user 0m2.297s 00:06:39.612 sys 0m0.216s 00:06:39.612 00:00:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.612 00:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.612 00:00:53 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:39.612 00:00:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:39.612 00:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.612 00:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.612 ************************************ 00:06:39.612 START TEST accel_copy_crc32c_C2 00:06:39.612 ************************************ 00:06:39.612 00:00:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:39.612 00:00:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.612 00:00:53 -- accel/accel.sh@17 -- # local accel_module 00:06:39.612 00:00:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:39.612 00:00:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:39.612 00:00:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.612 00:00:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.612 00:00:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.612 00:00:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.612 00:00:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.612 00:00:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.612 00:00:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.612 00:00:53 -- accel/accel.sh@42 -- # jq -r . 00:06:39.612 [2024-11-28 00:00:53.906398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:39.612 [2024-11-28 00:00:53.906562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70834 ] 00:06:39.612 [2024-11-28 00:00:54.046578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.612 [2024-11-28 00:00:54.076721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.000 00:00:55 -- accel/accel.sh@18 -- # out=' 00:06:41.000 SPDK Configuration: 00:06:41.000 Core mask: 0x1 00:06:41.000 00:06:41.000 Accel Perf Configuration: 00:06:41.000 Workload Type: copy_crc32c 00:06:41.000 CRC-32C seed: 0 00:06:41.000 Vector size: 4096 bytes 00:06:41.000 Transfer size: 8192 bytes 00:06:41.000 Vector count 2 00:06:41.000 Module: software 00:06:41.000 Queue depth: 32 00:06:41.000 Allocate depth: 32 00:06:41.000 # threads/core: 1 00:06:41.000 Run time: 1 seconds 00:06:41.000 Verify: Yes 00:06:41.000 00:06:41.000 Running for 1 seconds... 00:06:41.000 00:06:41.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.000 ------------------------------------------------------------------------------------ 00:06:41.000 0,0 232800/s 1818 MiB/s 0 0 00:06:41.000 ==================================================================================== 00:06:41.000 Total 232800/s 909 MiB/s 0 0' 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:41.000 00:00:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.000 00:00:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.000 00:00:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.000 00:00:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.000 00:00:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.000 00:00:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.000 00:00:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.000 00:00:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.000 00:00:55 -- accel/accel.sh@42 -- # jq -r . 00:06:41.000 [2024-11-28 00:00:55.241104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
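This variant adds -C 2, and the configuration block above accordingly reports Vector count 2 with an 8192-byte transfer size (two 4096-byte vectors per operation). The per-core bandwidth follows from the same arithmetic as before; the one-liner is illustrative and not part of the captured output:

    # 232800 transfers/s x 8192 bytes/transfer, expressed in MiB/s
    awk 'BEGIN { printf "%d MiB/s\n", int(232800 * 8192 / (1024 * 1024)) }'
    # prints: 1818 MiB/s, matching the per-core figure in the table above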
00:06:41.000 [2024-11-28 00:00:55.241217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:06:41.000 [2024-11-28 00:00:55.386664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.000 [2024-11-28 00:00:55.418227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=0x1 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=0 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=software 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=32 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=32 
00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=1 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val=Yes 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.000 00:00:55 -- accel/accel.sh@21 -- # val= 00:06:41.000 00:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:41.000 00:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:41.942 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:41.942 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:41.942 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:41.942 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:41.942 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:41.942 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:41.942 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:41.942 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:42.204 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.204 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:42.204 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.204 00:00:56 -- accel/accel.sh@21 -- # val= 00:06:42.204 00:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:42.204 00:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:42.204 00:00:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.204 00:00:56 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:42.204 00:00:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.204 00:06:42.204 real 0m2.672s 00:06:42.204 user 0m2.264s 00:06:42.204 sys 0m0.208s 00:06:42.204 ************************************ 00:06:42.204 END TEST accel_copy_crc32c_C2 00:06:42.204 ************************************ 00:06:42.204 00:00:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.204 00:00:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.204 00:00:56 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:42.204 00:00:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:42.204 00:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.204 00:00:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.204 ************************************ 00:06:42.204 START TEST accel_dualcast 00:06:42.204 ************************************ 00:06:42.204 00:00:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:42.204 00:00:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.204 00:00:56 -- accel/accel.sh@17 -- # local accel_module 00:06:42.204 00:00:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:42.204 00:00:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:42.204 00:00:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.204 00:00:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.204 00:00:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.204 00:00:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.204 00:00:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.204 00:00:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.204 00:00:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.204 00:00:56 -- accel/accel.sh@42 -- # jq -r . 00:06:42.204 [2024-11-28 00:00:56.638788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.204 [2024-11-28 00:00:56.638898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70884 ] 00:06:42.204 [2024-11-28 00:00:56.782208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.466 [2024-11-28 00:00:56.810021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.410 00:00:57 -- accel/accel.sh@18 -- # out=' 00:06:43.410 SPDK Configuration: 00:06:43.410 Core mask: 0x1 00:06:43.410 00:06:43.410 Accel Perf Configuration: 00:06:43.410 Workload Type: dualcast 00:06:43.410 Transfer size: 4096 bytes 00:06:43.410 Vector count 1 00:06:43.410 Module: software 00:06:43.410 Queue depth: 32 00:06:43.410 Allocate depth: 32 00:06:43.410 # threads/core: 1 00:06:43.410 Run time: 1 seconds 00:06:43.410 Verify: Yes 00:06:43.410 00:06:43.410 Running for 1 seconds... 00:06:43.410 00:06:43.410 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.410 ------------------------------------------------------------------------------------ 00:06:43.410 0,0 460256/s 1797 MiB/s 0 0 00:06:43.410 ==================================================================================== 00:06:43.410 Total 460256/s 1797 MiB/s 0 0' 00:06:43.410 00:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:43.410 00:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:43.410 00:00:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:43.410 00:00:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:43.410 00:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.410 00:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.410 00:00:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.410 00:00:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.410 00:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.410 00:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.410 00:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.410 00:00:57 -- accel/accel.sh@42 -- # jq -r . 
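Every workload in this stage emits the same fixed-format results table, so the per-workload totals can be pulled out of a saved copy of this console output for a quick side-by-side view. The helper below is hypothetical and not part of the test harness; build.log stands in for wherever the console text was saved, and it assumes the "Workload Type:" and "Total" lines keep the format shown above:

    # list each workload together with its Total throughput line from a saved log
    awk '/Workload Type:/ { wl = $NF }
         /Total [0-9]+\/s/ { print wl ":", $0 }' build.log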
00:06:43.410 [2024-11-28 00:00:57.964832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.410 [2024-11-28 00:00:57.964943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:06:43.672 [2024-11-28 00:00:58.112523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.672 [2024-11-28 00:00:58.157816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=0x1 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=dualcast 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=software 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=32 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=32 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=1 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 
00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val=Yes 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:43.672 00:00:58 -- accel/accel.sh@21 -- # val= 00:06:43.672 00:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:43.672 00:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@21 -- # val= 00:06:45.060 00:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.060 00:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.060 00:00:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.060 00:00:59 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:45.060 ************************************ 00:06:45.060 END TEST accel_dualcast 00:06:45.060 ************************************ 00:06:45.060 00:00:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.060 00:06:45.060 real 0m2.747s 00:06:45.060 user 0m2.308s 00:06:45.060 sys 0m0.237s 00:06:45.060 00:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.060 00:00:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.060 00:00:59 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:45.060 00:00:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.060 00:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.060 00:00:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.060 ************************************ 00:06:45.060 START TEST accel_compare 00:06:45.060 ************************************ 00:06:45.060 00:00:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:45.060 
00:00:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.060 00:00:59 -- accel/accel.sh@17 -- # local accel_module 00:06:45.060 00:00:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:45.060 00:00:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:45.060 00:00:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.060 00:00:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.060 00:00:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.060 00:00:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.060 00:00:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.060 00:00:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.060 00:00:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.060 00:00:59 -- accel/accel.sh@42 -- # jq -r . 00:06:45.060 [2024-11-28 00:00:59.438656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.060 [2024-11-28 00:00:59.438793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:06:45.060 [2024-11-28 00:00:59.591804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.060 [2024-11-28 00:00:59.641917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.449 00:01:00 -- accel/accel.sh@18 -- # out=' 00:06:46.449 SPDK Configuration: 00:06:46.449 Core mask: 0x1 00:06:46.449 00:06:46.449 Accel Perf Configuration: 00:06:46.449 Workload Type: compare 00:06:46.449 Transfer size: 4096 bytes 00:06:46.449 Vector count 1 00:06:46.449 Module: software 00:06:46.449 Queue depth: 32 00:06:46.449 Allocate depth: 32 00:06:46.449 # threads/core: 1 00:06:46.449 Run time: 1 seconds 00:06:46.449 Verify: Yes 00:06:46.449 00:06:46.449 Running for 1 seconds... 00:06:46.449 00:06:46.449 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.449 ------------------------------------------------------------------------------------ 00:06:46.449 0,0 428576/s 1674 MiB/s 0 0 00:06:46.449 ==================================================================================== 00:06:46.449 Total 428576/s 1674 MiB/s 0 0' 00:06:46.449 00:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:46.449 00:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:46.449 00:01:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:46.449 00:01:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.449 00:01:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.449 00:01:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.449 00:01:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.449 00:01:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.449 00:01:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.449 00:01:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.449 00:01:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.449 00:01:00 -- accel/accel.sh@42 -- # jq -r . 00:06:46.449 [2024-11-28 00:01:00.877717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
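Each of these blocks is emitted by the run_test wrapper visible in the trace: an asterisk banner with START TEST, the timed command (the real/user/sys lines), and a matching END TEST banner. The actual implementation lives in common/autotest_common.sh and is not reproduced in this log; the following is only a rough stand-in that mimics the visible behaviour, with the usage line mirroring the accel_compare invocation traced above:

    # rough stand-in for the run_test pattern seen in this log (not the real autotest_common.sh code)
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # accounts for the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test_sketch accel_compare accel_test -t 1 -w compare -y   # accel_test is the harness function traced above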
00:06:46.449 [2024-11-28 00:01:00.877854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:06:46.449 [2024-11-28 00:01:01.028410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.714 [2024-11-28 00:01:01.076007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.714 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.714 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.714 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.714 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.714 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.714 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.714 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.714 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.714 00:01:01 -- accel/accel.sh@21 -- # val=0x1 00:06:46.714 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.714 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=compare 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=software 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=32 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=32 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=1 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val=Yes 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:46.715 00:01:01 -- accel/accel.sh@21 -- # val= 00:06:46.715 00:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:46.715 00:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 00:01:02 -- accel/accel.sh@21 -- # val= 00:06:47.663 00:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:47.663 00:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:47.663 ************************************ 00:06:47.663 END TEST accel_compare 00:06:47.663 ************************************ 00:06:47.663 00:01:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.663 00:01:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:47.663 00:01:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.663 00:06:47.663 real 0m2.841s 00:06:47.663 user 0m2.342s 00:06:47.663 sys 0m0.291s 00:06:47.663 00:01:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.663 00:01:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.921 00:01:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.921 00:01:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.921 00:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.921 00:01:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.921 ************************************ 00:06:47.921 START TEST accel_xor 00:06:47.921 ************************************ 00:06:47.921 00:01:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:47.921 00:01:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.921 00:01:02 -- accel/accel.sh@17 -- # local accel_module 00:06:47.921 
00:01:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:47.921 00:01:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.921 00:01:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.921 00:01:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.921 00:01:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.921 00:01:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.921 00:01:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.921 00:01:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.921 00:01:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.921 00:01:02 -- accel/accel.sh@42 -- # jq -r . 00:06:47.921 [2024-11-28 00:01:02.338668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.921 [2024-11-28 00:01:02.338750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:06:47.921 [2024-11-28 00:01:02.480956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.921 [2024-11-28 00:01:02.512671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.298 00:01:03 -- accel/accel.sh@18 -- # out=' 00:06:49.298 SPDK Configuration: 00:06:49.298 Core mask: 0x1 00:06:49.298 00:06:49.298 Accel Perf Configuration: 00:06:49.298 Workload Type: xor 00:06:49.298 Source buffers: 2 00:06:49.298 Transfer size: 4096 bytes 00:06:49.298 Vector count 1 00:06:49.298 Module: software 00:06:49.298 Queue depth: 32 00:06:49.298 Allocate depth: 32 00:06:49.298 # threads/core: 1 00:06:49.298 Run time: 1 seconds 00:06:49.298 Verify: Yes 00:06:49.298 00:06:49.298 Running for 1 seconds... 00:06:49.298 00:06:49.298 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.298 ------------------------------------------------------------------------------------ 00:06:49.298 0,0 335360/s 1310 MiB/s 0 0 00:06:49.298 ==================================================================================== 00:06:49.298 Total 335360/s 1310 MiB/s 0 0' 00:06:49.298 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.298 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.298 00:01:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:49.298 00:01:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.298 00:01:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.298 00:01:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.298 00:01:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.298 00:01:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.298 00:01:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.298 00:01:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.298 00:01:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.298 00:01:03 -- accel/accel.sh@42 -- # jq -r . 00:06:49.298 [2024-11-28 00:01:03.692636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.298 [2024-11-28 00:01:03.692739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:06:49.298 [2024-11-28 00:01:03.840832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.298 [2024-11-28 00:01:03.872397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=0x1 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=xor 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=2 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=software 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=32 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=32 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=1 00:06:49.558 00:01:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val=Yes 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:49.558 00:01:03 -- accel/accel.sh@21 -- # val= 00:06:49.558 00:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:49.558 00:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@21 -- # val= 00:06:50.491 00:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:50.491 00:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:50.491 00:01:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.491 00:01:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.491 00:01:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.491 ************************************ 00:06:50.491 END TEST accel_xor 00:06:50.491 ************************************ 00:06:50.491 00:06:50.491 real 0m2.711s 00:06:50.491 user 0m2.284s 00:06:50.491 sys 0m0.223s 00:06:50.491 00:01:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.491 00:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.491 00:01:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:50.491 00:01:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.491 00:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.491 00:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.491 ************************************ 00:06:50.491 START TEST accel_xor 00:06:50.491 ************************************ 00:06:50.491 
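The xor workload started here XORs the bytes of several 4096-byte source buffers into one destination buffer and verifies the result; the run above used the default two sources, and this run requests three via -x 3. A minimal standalone C sketch of that operation — plain heap buffers only, not the SPDK accel framework API; buffer count and transfer size taken from the parameters visible in this log — might look like this:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define XFER_SIZE 4096   /* "Transfer size: 4096 bytes" in the runs above */

/* Byte-wise XOR of n_srcs source buffers into dst (illustration only). */
static void xor_buffers(uint8_t *dst, uint8_t **srcs, int n_srcs, size_t len)
{
    memset(dst, 0, len);
    for (int s = 0; s < n_srcs; s++)
        for (size_t i = 0; i < len; i++)
            dst[i] ^= srcs[s][i];
}

int main(void)
{
    enum { N_SRCS = 3 };                  /* mirrors -x 3 in this run */
    uint8_t *srcs[N_SRCS];
    static uint8_t dst[XFER_SIZE];

    for (int s = 0; s < N_SRCS; s++) {
        srcs[s] = malloc(XFER_SIZE);
        if (srcs[s] == NULL)
            return 1;
        for (size_t i = 0; i < XFER_SIZE; i++)
            srcs[s][i] = (uint8_t)rand();
    }

    xor_buffers(dst, srcs, N_SRCS, XFER_SIZE);

    /* "Verify: Yes": XORing the result back with every source must give zero. */
    for (size_t i = 0; i < XFER_SIZE; i++) {
        uint8_t check = dst[i];
        for (int s = 0; s < N_SRCS; s++)
            check ^= srcs[s][i];
        assert(check == 0);
    }

    for (int s = 0; s < N_SRCS; s++)
        free(srcs[s]);
    return 0;
}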
00:01:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:50.491 00:01:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.491 00:01:05 -- accel/accel.sh@17 -- # local accel_module 00:06:50.491 00:01:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:50.491 00:01:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:50.491 00:01:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.491 00:01:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.491 00:01:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.491 00:01:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.491 00:01:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.491 00:01:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.491 00:01:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.491 00:01:05 -- accel/accel.sh@42 -- # jq -r . 00:06:50.748 [2024-11-28 00:01:05.121855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.748 [2024-11-28 00:01:05.121958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71047 ] 00:06:50.748 [2024-11-28 00:01:05.266250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.748 [2024-11-28 00:01:05.299486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.125 00:01:06 -- accel/accel.sh@18 -- # out=' 00:06:52.125 SPDK Configuration: 00:06:52.125 Core mask: 0x1 00:06:52.125 00:06:52.125 Accel Perf Configuration: 00:06:52.125 Workload Type: xor 00:06:52.125 Source buffers: 3 00:06:52.125 Transfer size: 4096 bytes 00:06:52.125 Vector count 1 00:06:52.125 Module: software 00:06:52.125 Queue depth: 32 00:06:52.125 Allocate depth: 32 00:06:52.125 # threads/core: 1 00:06:52.125 Run time: 1 seconds 00:06:52.125 Verify: Yes 00:06:52.125 00:06:52.125 Running for 1 seconds... 00:06:52.125 00:06:52.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.125 ------------------------------------------------------------------------------------ 00:06:52.125 0,0 317120/s 1238 MiB/s 0 0 00:06:52.125 ==================================================================================== 00:06:52.125 Total 317120/s 1238 MiB/s 0 0' 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.125 00:01:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.125 00:01:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.125 00:01:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.125 00:01:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.125 00:01:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.125 00:01:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.125 00:01:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.125 00:01:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.125 00:01:06 -- accel/accel.sh@42 -- # jq -r . 00:06:52.125 [2024-11-28 00:01:06.482930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.125 [2024-11-28 00:01:06.483038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71062 ] 00:06:52.125 [2024-11-28 00:01:06.631734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.125 [2024-11-28 00:01:06.663219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=0x1 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=xor 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=3 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=software 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=32 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=32 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=1 00:06:52.125 00:01:06 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val=Yes 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.125 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.125 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.125 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:52.126 00:01:06 -- accel/accel.sh@21 -- # val= 00:06:52.126 00:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.126 00:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:52.126 00:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@21 -- # val= 00:06:53.502 00:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:53.502 00:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:53.502 00:01:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.502 00:01:07 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:53.502 00:01:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.502 00:06:53.502 real 0m2.722s 00:06:53.502 user 0m2.295s 00:06:53.502 sys 0m0.222s 00:06:53.502 00:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.502 ************************************ 00:06:53.502 END TEST accel_xor 00:06:53.502 ************************************ 00:06:53.502 00:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.502 00:01:07 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:53.502 00:01:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.502 00:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.502 00:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.502 ************************************ 00:06:53.502 START TEST accel_dif_verify 00:06:53.502 ************************************ 
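The dif_verify run below checks T10 DIF protection information: each 512-byte data block carries an 8-byte tuple of guard (a CRC over the block data), application tag, and reference tag, and any mismatch would show up in the Failed/Miscompares columns of the results. The standalone C sketch below only illustrates that check (and the matching dif_generate step exercised later); the separate-metadata layout, the starting reference tag, and the exact CRC16 parameters (bit-by-bit, polynomial 0x8BB7 as commonly quoted for T10-DIF) are assumptions, not taken from this log.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define BLOCK_SIZE 512   /* "Block size: 512 bytes" */
#define MD_SIZE      8   /* "Metadata size: 8 bytes" */

/* 8-byte protection information per block (T10 DIF style layout). */
struct dif_tuple {
    uint16_t guard;      /* CRC over the 512 data bytes */
    uint16_t app_tag;
    uint32_t ref_tag;
};

/* Bit-by-bit CRC16; 0x8BB7 is the polynomial usually quoted for T10-DIF
 * (the exact CRC parameters are an assumption, not shown in this log). */
static uint16_t guard_crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Verify num_blocks data blocks against their protection tuples. */
static bool dif_verify(const uint8_t *data, const struct dif_tuple *md,
                       size_t num_blocks, uint32_t first_ref_tag)
{
    for (size_t i = 0; i < num_blocks; i++) {
        const uint8_t *block = data + i * BLOCK_SIZE;
        if (md[i].guard != guard_crc16(block, BLOCK_SIZE))
            return false;                          /* guard miscompare */
        if (md[i].ref_tag != first_ref_tag + (uint32_t)i)
            return false;                          /* reference tag mismatch */
        /* app_tag checking is policy-dependent; skipped in this sketch. */
    }
    return true;
}

int main(void)
{
    enum { NUM_BLOCKS = 8 };                       /* 8 * 512 = 4096-byte transfer */
    static uint8_t data[NUM_BLOCKS * BLOCK_SIZE];
    struct dif_tuple md[NUM_BLOCKS];

    for (size_t i = 0; i < sizeof(data); i++)
        data[i] = (uint8_t)i;

    /* "dif_generate" side: fill in the protection tuples... */
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        md[i].guard   = guard_crc16(data + i * BLOCK_SIZE, BLOCK_SIZE);
        md[i].app_tag = 0;
        md[i].ref_tag = 100 + (uint32_t)i;         /* arbitrary starting ref tag */
    }

    /* ...and the "dif_verify" side checks them. */
    return dif_verify(data, md, NUM_BLOCKS, 100) ? 0 : 1;
}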
00:06:53.502 00:01:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:53.502 00:01:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.502 00:01:07 -- accel/accel.sh@17 -- # local accel_module 00:06:53.502 00:01:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:53.502 00:01:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.502 00:01:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.502 00:01:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.502 00:01:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.502 00:01:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.502 00:01:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.502 00:01:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.502 00:01:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.502 00:01:07 -- accel/accel.sh@42 -- # jq -r . 00:06:53.502 [2024-11-28 00:01:07.897161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.502 [2024-11-28 00:01:07.897267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71092 ] 00:06:53.502 [2024-11-28 00:01:08.043493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.502 [2024-11-28 00:01:08.075722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.878 00:01:09 -- accel/accel.sh@18 -- # out=' 00:06:54.878 SPDK Configuration: 00:06:54.878 Core mask: 0x1 00:06:54.878 00:06:54.878 Accel Perf Configuration: 00:06:54.878 Workload Type: dif_verify 00:06:54.878 Vector size: 4096 bytes 00:06:54.878 Transfer size: 4096 bytes 00:06:54.878 Block size: 512 bytes 00:06:54.878 Metadata size: 8 bytes 00:06:54.878 Vector count 1 00:06:54.878 Module: software 00:06:54.878 Queue depth: 32 00:06:54.878 Allocate depth: 32 00:06:54.878 # threads/core: 1 00:06:54.878 Run time: 1 seconds 00:06:54.878 Verify: No 00:06:54.878 00:06:54.878 Running for 1 seconds... 00:06:54.878 00:06:54.878 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.878 ------------------------------------------------------------------------------------ 00:06:54.878 0,0 98400/s 390 MiB/s 0 0 00:06:54.878 ==================================================================================== 00:06:54.878 Total 98400/s 384 MiB/s 0 0' 00:06:54.878 00:01:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.878 00:01:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.878 00:01:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.878 00:01:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.878 00:01:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.878 00:01:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.878 00:01:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.878 00:01:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.878 00:01:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.878 00:01:09 -- accel/accel.sh@42 -- # jq -r . 00:06:54.878 [2024-11-28 00:01:09.249755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:54.878 [2024-11-28 00:01:09.249867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71120 ] 00:06:54.878 [2024-11-28 00:01:09.397302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.878 [2024-11-28 00:01:09.427623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.878 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.878 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.878 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.878 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.878 00:01:09 -- accel/accel.sh@21 -- # val=0x1 00:06:54.878 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.878 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.878 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.878 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val=dif_verify 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val=software 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 
-- # val=32 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val=32 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val=1 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val=No 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.879 00:01:09 -- accel/accel.sh@21 -- # val= 00:06:54.879 00:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.879 00:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@21 -- # val= 00:06:56.254 00:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:56.254 00:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:56.254 00:01:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.254 00:01:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:56.254 00:01:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.254 00:06:56.254 real 0m2.707s 00:06:56.254 user 0m2.307s 00:06:56.254 sys 0m0.197s 00:06:56.254 00:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.254 00:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.254 ************************************ 00:06:56.254 END TEST 
accel_dif_verify 00:06:56.254 ************************************ 00:06:56.254 00:01:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.254 00:01:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:56.254 00:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.254 00:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.254 ************************************ 00:06:56.254 START TEST accel_dif_generate 00:06:56.254 ************************************ 00:06:56.254 00:01:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:56.254 00:01:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.254 00:01:10 -- accel/accel.sh@17 -- # local accel_module 00:06:56.254 00:01:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:56.254 00:01:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.254 00:01:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.254 00:01:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.254 00:01:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.254 00:01:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.254 00:01:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.254 00:01:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.254 00:01:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.254 00:01:10 -- accel/accel.sh@42 -- # jq -r . 00:06:56.254 [2024-11-28 00:01:10.663605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.254 [2024-11-28 00:01:10.663800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:06:56.254 [2024-11-28 00:01:10.809424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.254 [2024-11-28 00:01:10.839873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.629 00:01:11 -- accel/accel.sh@18 -- # out=' 00:06:57.629 SPDK Configuration: 00:06:57.629 Core mask: 0x1 00:06:57.629 00:06:57.629 Accel Perf Configuration: 00:06:57.629 Workload Type: dif_generate 00:06:57.629 Vector size: 4096 bytes 00:06:57.629 Transfer size: 4096 bytes 00:06:57.629 Block size: 512 bytes 00:06:57.629 Metadata size: 8 bytes 00:06:57.629 Vector count 1 00:06:57.629 Module: software 00:06:57.629 Queue depth: 32 00:06:57.629 Allocate depth: 32 00:06:57.629 # threads/core: 1 00:06:57.629 Run time: 1 seconds 00:06:57.629 Verify: No 00:06:57.629 00:06:57.629 Running for 1 seconds... 
00:06:57.629 00:06:57.629 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.629 ------------------------------------------------------------------------------------ 00:06:57.630 0,0 118304/s 469 MiB/s 0 0 00:06:57.630 ==================================================================================== 00:06:57.630 Total 118304/s 462 MiB/s 0 0' 00:06:57.630 00:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:57.630 00:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:57.630 00:01:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:57.630 00:01:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:57.630 00:01:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.630 00:01:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.630 00:01:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.630 00:01:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.630 00:01:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.630 00:01:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.630 00:01:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.630 00:01:11 -- accel/accel.sh@42 -- # jq -r . 00:06:57.630 [2024-11-28 00:01:12.017087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.630 [2024-11-28 00:01:12.017195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71165 ] 00:06:57.630 [2024-11-28 00:01:12.163240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.630 [2024-11-28 00:01:12.193428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val=0x1 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val=dif_generate 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.890 00:01:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.890 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.890 00:01:12 -- accel/accel.sh@20 -- # read -r var val 
00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val=software 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val=32 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val=32 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val=1 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val=No 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:57.891 00:01:12 -- accel/accel.sh@21 -- # val= 00:06:57.891 00:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:57.891 00:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- 
accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@21 -- # val= 00:06:58.827 00:01:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # IFS=: 00:06:58.827 00:01:13 -- accel/accel.sh@20 -- # read -r var val 00:06:58.827 00:01:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.827 00:01:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:58.827 00:01:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.827 00:06:58.827 real 0m2.706s 00:06:58.827 user 0m2.295s 00:06:58.827 sys 0m0.207s 00:06:58.827 ************************************ 00:06:58.827 END TEST accel_dif_generate 00:06:58.827 ************************************ 00:06:58.827 00:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.827 00:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.827 00:01:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:58.827 00:01:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:58.827 00:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.827 00:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.827 ************************************ 00:06:58.827 START TEST accel_dif_generate_copy 00:06:58.827 ************************************ 00:06:58.827 00:01:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:58.827 00:01:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.827 00:01:13 -- accel/accel.sh@17 -- # local accel_module 00:06:58.827 00:01:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:58.827 00:01:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:58.827 00:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.827 00:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.827 00:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.827 00:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.827 00:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.827 00:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.827 00:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.827 00:01:13 -- accel/accel.sh@42 -- # jq -r . 00:06:59.085 [2024-11-28 00:01:13.428466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.086 [2024-11-28 00:01:13.428671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71206 ] 00:06:59.086 [2024-11-28 00:01:13.570422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.086 [2024-11-28 00:01:13.600811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.465 00:01:14 -- accel/accel.sh@18 -- # out=' 00:07:00.465 SPDK Configuration: 00:07:00.465 Core mask: 0x1 00:07:00.465 00:07:00.465 Accel Perf Configuration: 00:07:00.465 Workload Type: dif_generate_copy 00:07:00.465 Vector size: 4096 bytes 00:07:00.465 Transfer size: 4096 bytes 00:07:00.465 Vector count 1 00:07:00.465 Module: software 00:07:00.465 Queue depth: 32 00:07:00.465 Allocate depth: 32 00:07:00.465 # threads/core: 1 00:07:00.465 Run time: 1 seconds 00:07:00.465 Verify: No 00:07:00.465 00:07:00.465 Running for 1 seconds... 00:07:00.465 00:07:00.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.465 ------------------------------------------------------------------------------------ 00:07:00.465 0,0 88096/s 349 MiB/s 0 0 00:07:00.465 ==================================================================================== 00:07:00.465 Total 88096/s 344 MiB/s 0 0' 00:07:00.465 00:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:00.465 00:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:00.465 00:01:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.465 00:01:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.465 00:01:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.465 00:01:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.465 00:01:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.466 00:01:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.466 00:01:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.466 00:01:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.466 00:01:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.466 00:01:14 -- accel/accel.sh@42 -- # jq -r . 00:07:00.466 [2024-11-28 00:01:14.811677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:00.466 [2024-11-28 00:01:14.811814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71221 ] 00:07:00.466 [2024-11-28 00:01:14.960421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.466 [2024-11-28 00:01:15.008253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val=0x1 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.466 00:01:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.466 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.466 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val=software 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val=32 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val=32 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 
-- # val=1 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val=No 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:00.724 00:01:15 -- accel/accel.sh@21 -- # val= 00:07:00.724 00:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:00.724 00:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@21 -- # val= 00:07:01.662 00:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 00:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 00:01:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.662 00:01:16 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:01.662 ************************************ 00:07:01.662 END TEST accel_dif_generate_copy 00:07:01.662 ************************************ 00:07:01.662 00:01:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.662 00:07:01.662 real 0m2.819s 00:07:01.662 user 0m2.371s 00:07:01.662 sys 0m0.240s 00:07:01.662 00:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.662 00:01:16 -- common/autotest_common.sh@10 -- # set +x 00:07:01.923 00:01:16 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:01.923 00:01:16 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.923 00:01:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:01.923 00:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.923 00:01:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.923 ************************************ 00:07:01.923 START TEST accel_comp 00:07:01.923 ************************************ 00:07:01.923 00:01:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.923 00:01:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.923 00:01:16 -- accel/accel.sh@17 -- # local accel_module 00:07:01.923 00:01:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.923 00:01:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.923 00:01:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.923 00:01:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.923 00:01:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.923 00:01:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.923 00:01:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.923 00:01:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.923 00:01:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.923 00:01:16 -- accel/accel.sh@42 -- # jq -r . 00:07:01.923 [2024-11-28 00:01:16.314676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.923 [2024-11-28 00:01:16.314814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71261 ] 00:07:01.923 [2024-11-28 00:01:16.462643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.923 [2024-11-28 00:01:16.510574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.309 00:01:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:03.309 00:07:03.309 SPDK Configuration: 00:07:03.309 Core mask: 0x1 00:07:03.309 00:07:03.309 Accel Perf Configuration: 00:07:03.309 Workload Type: compress 00:07:03.309 Transfer size: 4096 bytes 00:07:03.309 Vector count 1 00:07:03.309 Module: software 00:07:03.309 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.309 Queue depth: 32 00:07:03.309 Allocate depth: 32 00:07:03.309 # threads/core: 1 00:07:03.309 Run time: 1 seconds 00:07:03.309 Verify: No 00:07:03.309 00:07:03.309 Running for 1 seconds... 
00:07:03.309 00:07:03.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.309 ------------------------------------------------------------------------------------ 00:07:03.309 0,0 48992/s 204 MiB/s 0 0 00:07:03.309 ==================================================================================== 00:07:03.309 Total 48992/s 191 MiB/s 0 0' 00:07:03.309 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.309 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.309 00:01:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.309 00:01:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.309 00:01:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.309 00:01:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.309 00:01:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.309 00:01:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.309 00:01:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.309 00:01:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.309 00:01:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.309 00:01:17 -- accel/accel.sh@42 -- # jq -r . 00:07:03.309 [2024-11-28 00:01:17.729838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.309 [2024-11-28 00:01:17.729939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71277 ] 00:07:03.309 [2024-11-28 00:01:17.877152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.309 [2024-11-28 00:01:17.907268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=0x1 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=compress 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 
00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=software 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=32 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=32 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=1 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val=No 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:03.568 00:01:17 -- accel/accel.sh@21 -- # val= 00:07:03.568 00:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:03.568 00:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 
00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@21 -- # val= 00:07:04.506 00:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:04.506 00:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:04.506 00:01:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.506 00:01:19 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:04.506 00:01:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.506 00:07:04.506 real 0m2.760s 00:07:04.506 user 0m2.305s 00:07:04.506 sys 0m0.252s 00:07:04.506 00:01:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.506 00:01:19 -- common/autotest_common.sh@10 -- # set +x 00:07:04.506 ************************************ 00:07:04.506 END TEST accel_comp 00:07:04.506 ************************************ 00:07:04.506 00:01:19 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.506 00:01:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.506 00:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.506 00:01:19 -- common/autotest_common.sh@10 -- # set +x 00:07:04.506 ************************************ 00:07:04.506 START TEST accel_decomp 00:07:04.506 ************************************ 00:07:04.506 00:01:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.506 00:01:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.506 00:01:19 -- accel/accel.sh@17 -- # local accel_module 00:07:04.506 00:01:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.506 00:01:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:04.506 00:01:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.506 00:01:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.506 00:01:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.506 00:01:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.506 00:01:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.506 00:01:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.506 00:01:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.506 00:01:19 -- accel/accel.sh@42 -- # jq -r . 00:07:04.764 [2024-11-28 00:01:19.124503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.764 [2024-11-28 00:01:19.124605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71307 ] 00:07:04.764 [2024-11-28 00:01:19.269288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.764 [2024-11-28 00:01:19.299733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.138 00:01:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:06.138 00:07:06.138 SPDK Configuration: 00:07:06.138 Core mask: 0x1 00:07:06.138 00:07:06.138 Accel Perf Configuration: 00:07:06.138 Workload Type: decompress 00:07:06.138 Transfer size: 4096 bytes 00:07:06.138 Vector count 1 00:07:06.138 Module: software 00:07:06.138 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.138 Queue depth: 32 00:07:06.138 Allocate depth: 32 00:07:06.138 # threads/core: 1 00:07:06.138 Run time: 1 seconds 00:07:06.138 Verify: Yes 00:07:06.138 00:07:06.138 Running for 1 seconds... 00:07:06.138 00:07:06.138 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.138 ------------------------------------------------------------------------------------ 00:07:06.138 0,0 62336/s 114 MiB/s 0 0 00:07:06.138 ==================================================================================== 00:07:06.138 Total 62336/s 243 MiB/s 0 0' 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.138 00:01:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.138 00:01:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.138 00:01:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.138 00:01:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.138 00:01:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.138 00:01:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.138 00:01:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.138 00:01:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.138 00:01:20 -- accel/accel.sh@42 -- # jq -r . 00:07:06.138 [2024-11-28 00:01:20.476566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
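A quick consistency check on the result tables above (editorial sketch, not part of the captured harness output): the Bandwidth figure in each Total row is consistent with transfers per second multiplied by the 4096-byte transfer size reported in the configuration blocks.

  # Recompute the Total bandwidth from the rates logged above:
  # 48992/s is the compress total, 62336/s the decompress total, 4096 B per transfer.
  for rate in 48992 62336; do
      echo "${rate} xfers/s * 4096 B ~= $(( rate * 4096 / 1024 / 1024 )) MiB/s"
  done
  # Prints ~191 MiB/s and ~243 MiB/s, matching the two Total rows above.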
00:07:06.138 [2024-11-28 00:01:20.476679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ] 00:07:06.138 [2024-11-28 00:01:20.624741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.138 [2024-11-28 00:01:20.654760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=0x1 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=decompress 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=software 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=32 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- 
accel/accel.sh@21 -- # val=32 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=1 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val=Yes 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:06.138 00:01:20 -- accel/accel.sh@21 -- # val= 00:07:06.138 00:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:06.138 00:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@21 -- # val= 00:07:07.511 00:01:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # IFS=: 00:07:07.511 00:01:21 -- accel/accel.sh@20 -- # read -r var val 00:07:07.511 00:01:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.511 00:01:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.511 00:01:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.511 00:07:07.511 real 0m2.704s 00:07:07.511 user 0m2.285s 00:07:07.511 sys 0m0.213s 00:07:07.511 ************************************ 00:07:07.511 END TEST accel_decomp 00:07:07.511 ************************************ 00:07:07.511 00:01:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.511 00:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.511 00:01:21 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:07.511 00:01:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:07.511 00:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.511 00:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.511 ************************************ 00:07:07.511 START TEST accel_decmop_full 00:07:07.511 ************************************ 00:07:07.511 00:01:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.511 00:01:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.511 00:01:21 -- accel/accel.sh@17 -- # local accel_module 00:07:07.511 00:01:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.511 00:01:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:07.511 00:01:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.511 00:01:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.511 00:01:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.511 00:01:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.511 00:01:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.511 00:01:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.511 00:01:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.511 00:01:21 -- accel/accel.sh@42 -- # jq -r . 00:07:07.512 [2024-11-28 00:01:21.885597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.512 [2024-11-28 00:01:21.885706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71364 ] 00:07:07.512 [2024-11-28 00:01:22.024633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.512 [2024-11-28 00:01:22.054735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.882 00:01:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.882 00:07:08.882 SPDK Configuration: 00:07:08.883 Core mask: 0x1 00:07:08.883 00:07:08.883 Accel Perf Configuration: 00:07:08.883 Workload Type: decompress 00:07:08.883 Transfer size: 111250 bytes 00:07:08.883 Vector count 1 00:07:08.883 Module: software 00:07:08.883 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.883 Queue depth: 32 00:07:08.883 Allocate depth: 32 00:07:08.883 # threads/core: 1 00:07:08.883 Run time: 1 seconds 00:07:08.883 Verify: Yes 00:07:08.883 00:07:08.883 Running for 1 seconds... 
00:07:08.883 00:07:08.883 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.883 ------------------------------------------------------------------------------------ 00:07:08.883 0,0 4352/s 179 MiB/s 0 0 00:07:08.883 ==================================================================================== 00:07:08.883 Total 4352/s 461 MiB/s 0 0' 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:08.883 00:01:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:08.883 00:01:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.883 00:01:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.883 00:01:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.883 00:01:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.883 00:01:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.883 00:01:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.883 00:01:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.883 00:01:23 -- accel/accel.sh@42 -- # jq -r . 00:07:08.883 [2024-11-28 00:01:23.245978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.883 [2024-11-28 00:01:23.246095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:07:08.883 [2024-11-28 00:01:23.390497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.883 [2024-11-28 00:01:23.420987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=0x1 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=decompress 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:08.883 00:01:23 -- accel/accel.sh@20 
-- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=software 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=32 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=32 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=1 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val=Yes 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:08.883 00:01:23 -- accel/accel.sh@21 -- # val= 00:07:08.883 00:01:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # IFS=: 00:07:08.883 00:01:23 -- accel/accel.sh@20 -- # read -r var val 00:07:10.255 00:01:24 -- accel/accel.sh@21 -- # val= 00:07:10.255 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.255 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.255 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.255 00:01:24 -- accel/accel.sh@21 -- # val= 00:07:10.255 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.255 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.255 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.256 00:01:24 -- accel/accel.sh@21 -- # val= 00:07:10.256 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.256 00:01:24 -- accel/accel.sh@21 -- # 
val= 00:07:10.256 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.256 00:01:24 -- accel/accel.sh@21 -- # val= 00:07:10.256 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.256 00:01:24 -- accel/accel.sh@21 -- # val= 00:07:10.256 00:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # IFS=: 00:07:10.256 00:01:24 -- accel/accel.sh@20 -- # read -r var val 00:07:10.256 00:01:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.256 ************************************ 00:07:10.256 END TEST accel_decmop_full 00:07:10.256 ************************************ 00:07:10.256 00:01:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.256 00:01:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.256 00:07:10.256 real 0m2.722s 00:07:10.256 user 0m2.302s 00:07:10.256 sys 0m0.217s 00:07:10.256 00:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.256 00:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.256 00:01:24 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.256 00:01:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:10.256 00:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.256 00:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.256 ************************************ 00:07:10.256 START TEST accel_decomp_mcore 00:07:10.256 ************************************ 00:07:10.256 00:01:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.256 00:01:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.256 00:01:24 -- accel/accel.sh@17 -- # local accel_module 00:07:10.256 00:01:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.256 00:01:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:10.256 00:01:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.256 00:01:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.256 00:01:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.256 00:01:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.256 00:01:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.256 00:01:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.256 00:01:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.256 00:01:24 -- accel/accel.sh@42 -- # jq -r . 00:07:10.256 [2024-11-28 00:01:24.654130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.256 [2024-11-28 00:01:24.654212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71421 ] 00:07:10.256 [2024-11-28 00:01:24.788245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.256 [2024-11-28 00:01:24.817062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.256 [2024-11-28 00:01:24.817320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.256 [2024-11-28 00:01:24.817442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.256 [2024-11-28 00:01:24.817535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.630 00:01:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:11.630 00:07:11.630 SPDK Configuration: 00:07:11.630 Core mask: 0xf 00:07:11.630 00:07:11.630 Accel Perf Configuration: 00:07:11.630 Workload Type: decompress 00:07:11.630 Transfer size: 4096 bytes 00:07:11.630 Vector count 1 00:07:11.630 Module: software 00:07:11.630 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.630 Queue depth: 32 00:07:11.630 Allocate depth: 32 00:07:11.630 # threads/core: 1 00:07:11.630 Run time: 1 seconds 00:07:11.630 Verify: Yes 00:07:11.630 00:07:11.630 Running for 1 seconds... 00:07:11.630 00:07:11.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.630 ------------------------------------------------------------------------------------ 00:07:11.630 0,0 78432/s 144 MiB/s 0 0 00:07:11.630 3,0 58816/s 108 MiB/s 0 0 00:07:11.630 2,0 58752/s 108 MiB/s 0 0 00:07:11.630 1,0 58688/s 108 MiB/s 0 0 00:07:11.630 ==================================================================================== 00:07:11.630 Total 254688/s 994 MiB/s 0 0' 00:07:11.630 00:01:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.630 00:01:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:11.630 00:01:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.630 00:01:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.630 00:01:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.630 00:01:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.630 00:01:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.630 00:01:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.630 00:01:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.630 00:01:25 -- accel/accel.sh@42 -- # jq -r . 00:07:11.630 [2024-11-28 00:01:25.985834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
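In the accel_decomp_mcore run above, the -m 0xf core mask covers cores 0-3, which lines up with "Total cores available: 4", the four "Reactor started on core N" notices, and one result row per core; the Total row is the sum of the per-core transfer rates. A small bash check (editorial sketch, rates copied from the table above):

  # Per-core decompress rates for the 0xf (cores 0-3) run above.
  total=0
  for rate in 78432 58816 58752 58688; do
      total=$(( total + rate ))
  done
  echo "${total}/s"   # 254688/s, matching the Total row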
00:07:11.630 [2024-11-28 00:01:25.985947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71440 ] 00:07:11.630 [2024-11-28 00:01:26.130981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.630 [2024-11-28 00:01:26.159870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.630 [2024-11-28 00:01:26.160195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.630 [2024-11-28 00:01:26.160382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.630 [2024-11-28 00:01:26.160452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=0xf 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=decompress 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=software 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 
00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=32 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=32 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=1 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val=Yes 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.630 00:01:26 -- accel/accel.sh@21 -- # val= 00:07:11.630 00:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.630 00:01:26 -- accel/accel.sh@20 -- # read -r var val 00:07:13.002 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.002 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.002 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.002 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.002 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.002 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.002 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.002 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- 
accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@21 -- # val= 00:07:13.003 00:01:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # IFS=: 00:07:13.003 00:01:27 -- accel/accel.sh@20 -- # read -r var val 00:07:13.003 00:01:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.003 00:01:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:13.003 00:01:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.003 00:07:13.003 real 0m2.668s 00:07:13.003 user 0m8.683s 00:07:13.003 sys 0m0.229s 00:07:13.003 00:01:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.003 00:01:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.003 ************************************ 00:07:13.003 END TEST accel_decomp_mcore 00:07:13.003 ************************************ 00:07:13.003 00:01:27 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 00:01:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:13.003 00:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.003 00:01:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.003 ************************************ 00:07:13.003 START TEST accel_decomp_full_mcore 00:07:13.003 ************************************ 00:07:13.003 00:01:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 00:01:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.003 00:01:27 -- accel/accel.sh@17 -- # local accel_module 00:07:13.003 00:01:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 00:01:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.003 00:01:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.003 00:01:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.003 00:01:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.003 00:01:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.003 00:01:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.003 00:01:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.003 00:01:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.003 00:01:27 -- accel/accel.sh@42 -- # jq -r . 00:07:13.003 [2024-11-28 00:01:27.366075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.003 [2024-11-28 00:01:27.366176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:07:13.003 [2024-11-28 00:01:27.511818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.003 [2024-11-28 00:01:27.540756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.003 [2024-11-28 00:01:27.540935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.003 [2024-11-28 00:01:27.541006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.003 [2024-11-28 00:01:27.541049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.378 00:01:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:14.378 00:07:14.378 SPDK Configuration: 00:07:14.378 Core mask: 0xf 00:07:14.378 00:07:14.378 Accel Perf Configuration: 00:07:14.378 Workload Type: decompress 00:07:14.378 Transfer size: 111250 bytes 00:07:14.378 Vector count 1 00:07:14.378 Module: software 00:07:14.378 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.378 Queue depth: 32 00:07:14.378 Allocate depth: 32 00:07:14.378 # threads/core: 1 00:07:14.378 Run time: 1 seconds 00:07:14.378 Verify: Yes 00:07:14.378 00:07:14.378 Running for 1 seconds... 00:07:14.378 00:07:14.378 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.378 ------------------------------------------------------------------------------------ 00:07:14.378 0,0 5792/s 239 MiB/s 0 0 00:07:14.378 3,0 4352/s 179 MiB/s 0 0 00:07:14.378 2,0 4320/s 178 MiB/s 0 0 00:07:14.378 1,0 4352/s 179 MiB/s 0 0 00:07:14.378 ==================================================================================== 00:07:14.378 Total 18816/s 1996 MiB/s 0 0' 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.378 00:01:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.378 00:01:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.378 00:01:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.378 00:01:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.378 00:01:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.378 00:01:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.378 00:01:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.378 00:01:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.378 00:01:28 -- accel/accel.sh@42 -- # jq -r . 00:07:14.378 [2024-11-28 00:01:28.719790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
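The accel_decomp_full_mcore run is invoked with -y -o 0, and its configuration block reports a transfer size of 111250 bytes instead of the 4096 bytes used by the earlier runs, so a much lower transfer rate still yields a far higher bandwidth. A quick check of the Total row above (editorial sketch; the 18816/s rate and 111250 B size are taken from this log):

  # Full-size decompress buffers: 18816 transfers/s at 111250 bytes each.
  rate=18816
  size=111250
  echo "$(( rate * size / 1024 / 1024 )) MiB/s"   # ~1996 MiB/s, matching "Total 18816/s 1996 MiB/s"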
00:07:14.378 [2024-11-28 00:01:28.720001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:07:14.378 [2024-11-28 00:01:28.863818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.378 [2024-11-28 00:01:28.892614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.378 [2024-11-28 00:01:28.892926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.378 [2024-11-28 00:01:28.893045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.378 [2024-11-28 00:01:28.893089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=0xf 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=decompress 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=software 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=32 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=32 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=1 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val=Yes 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:14.378 00:01:28 -- accel/accel.sh@21 -- # val= 00:07:14.378 00:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # IFS=: 00:07:14.378 00:01:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- 
accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@21 -- # val= 00:07:15.821 00:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # IFS=: 00:07:15.821 00:01:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.821 00:01:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.821 00:01:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.821 00:01:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.821 00:07:15.821 real 0m2.713s 00:07:15.821 user 0m8.796s 00:07:15.821 sys 0m0.234s 00:07:15.821 00:01:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.821 ************************************ 00:07:15.821 END TEST accel_decomp_full_mcore 00:07:15.821 ************************************ 00:07:15.821 00:01:30 -- common/autotest_common.sh@10 -- # set +x 00:07:15.821 00:01:30 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.821 00:01:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:15.821 00:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.821 00:01:30 -- common/autotest_common.sh@10 -- # set +x 00:07:15.821 ************************************ 00:07:15.821 START TEST accel_decomp_mthread 00:07:15.821 ************************************ 00:07:15.821 00:01:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.821 00:01:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.821 00:01:30 -- accel/accel.sh@17 -- # local accel_module 00:07:15.821 00:01:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.821 00:01:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.821 00:01:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.821 00:01:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.821 00:01:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.821 00:01:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.821 00:01:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.821 00:01:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.821 00:01:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.821 00:01:30 -- accel/accel.sh@42 -- # jq -r . 00:07:15.821 [2024-11-28 00:01:30.138748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.821 [2024-11-28 00:01:30.138857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71535 ] 00:07:15.821 [2024-11-28 00:01:30.286145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.821 [2024-11-28 00:01:30.313194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.198 00:01:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:17.198 00:07:17.198 SPDK Configuration: 00:07:17.198 Core mask: 0x1 00:07:17.198 00:07:17.198 Accel Perf Configuration: 00:07:17.198 Workload Type: decompress 00:07:17.198 Transfer size: 4096 bytes 00:07:17.198 Vector count 1 00:07:17.198 Module: software 00:07:17.198 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.198 Queue depth: 32 00:07:17.198 Allocate depth: 32 00:07:17.198 # threads/core: 2 00:07:17.198 Run time: 1 seconds 00:07:17.198 Verify: Yes 00:07:17.198 00:07:17.198 Running for 1 seconds... 00:07:17.198 00:07:17.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.198 ------------------------------------------------------------------------------------ 00:07:17.198 0,1 41696/s 76 MiB/s 0 0 00:07:17.198 0,0 41568/s 76 MiB/s 0 0 00:07:17.198 ==================================================================================== 00:07:17.198 Total 83264/s 325 MiB/s 0 0' 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:17.198 00:01:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:17.198 00:01:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.198 00:01:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.198 00:01:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.198 00:01:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.198 00:01:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.198 00:01:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.198 00:01:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.198 00:01:31 -- accel/accel.sh@42 -- # jq -r . 00:07:17.198 [2024-11-28 00:01:31.476499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
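accel_decomp_mthread adds -T 2, reflected above as "# threads/core: 2": core 0 contributes two result rows (0,0 and 0,1), and the Total row is again their sum (editorial sketch, rates copied from the table above):

  echo "$(( 41696 + 41568 ))/s"   # 83264/s, matching the Total row above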
00:07:17.198 [2024-11-28 00:01:31.476606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71556 ] 00:07:17.198 [2024-11-28 00:01:31.622921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.198 [2024-11-28 00:01:31.651777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=0x1 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=decompress 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=software 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=32 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- 
accel/accel.sh@21 -- # val=32 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val=2 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.198 00:01:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.198 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.198 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.199 00:01:31 -- accel/accel.sh@21 -- # val=Yes 00:07:17.199 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.199 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.199 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:17.199 00:01:31 -- accel/accel.sh@21 -- # val= 00:07:17.199 00:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # IFS=: 00:07:17.199 00:01:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 ************************************ 00:07:18.574 END TEST accel_decomp_mthread 00:07:18.574 ************************************ 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@21 -- # val= 00:07:18.574 00:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # IFS=: 00:07:18.574 00:01:32 -- accel/accel.sh@20 -- # read -r var val 00:07:18.574 00:01:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.574 00:01:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.574 00:01:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.574 00:07:18.574 real 0m2.675s 00:07:18.574 user 0m2.284s 00:07:18.574 sys 0m0.191s 00:07:18.574 00:01:32 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:18.574 00:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.574 00:01:32 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.574 00:01:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:18.574 00:01:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.574 00:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.574 ************************************ 00:07:18.574 START TEST accel_deomp_full_mthread 00:07:18.574 ************************************ 00:07:18.574 00:01:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.574 00:01:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.574 00:01:32 -- accel/accel.sh@17 -- # local accel_module 00:07:18.574 00:01:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.574 00:01:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.574 00:01:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.574 00:01:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.574 00:01:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.574 00:01:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.574 00:01:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.574 00:01:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.574 00:01:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.574 00:01:32 -- accel/accel.sh@42 -- # jq -r . 00:07:18.574 [2024-11-28 00:01:32.853006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.574 [2024-11-28 00:01:32.853209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71586 ] 00:07:18.574 [2024-11-28 00:01:32.999437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.574 [2024-11-28 00:01:33.030183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.950 00:01:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.950 00:07:19.950 SPDK Configuration: 00:07:19.950 Core mask: 0x1 00:07:19.950 00:07:19.950 Accel Perf Configuration: 00:07:19.950 Workload Type: decompress 00:07:19.950 Transfer size: 111250 bytes 00:07:19.950 Vector count 1 00:07:19.950 Module: software 00:07:19.950 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.950 Queue depth: 32 00:07:19.950 Allocate depth: 32 00:07:19.950 # threads/core: 2 00:07:19.950 Run time: 1 seconds 00:07:19.950 Verify: Yes 00:07:19.950 00:07:19.950 Running for 1 seconds... 
00:07:19.950 00:07:19.950 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.950 ------------------------------------------------------------------------------------ 00:07:19.950 0,1 2240/s 92 MiB/s 0 0 00:07:19.950 0,0 2176/s 89 MiB/s 0 0 00:07:19.950 ==================================================================================== 00:07:19.950 Total 4416/s 468 MiB/s 0 0' 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.950 00:01:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.950 00:01:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.950 00:01:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.950 00:01:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.950 00:01:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.950 00:01:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.950 00:01:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.950 00:01:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.950 00:01:34 -- accel/accel.sh@42 -- # jq -r . 00:07:19.950 [2024-11-28 00:01:34.241117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.950 [2024-11-28 00:01:34.241346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71606 ] 00:07:19.950 [2024-11-28 00:01:34.387159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.950 [2024-11-28 00:01:34.418190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=0x1 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=decompress 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=software 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=32 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=32 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=2 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val=Yes 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.950 00:01:34 -- accel/accel.sh@21 -- # val= 00:07:19.950 00:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # IFS=: 00:07:19.950 00:01:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # 
read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@21 -- # val= 00:07:21.325 00:01:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # IFS=: 00:07:21.325 00:01:35 -- accel/accel.sh@20 -- # read -r var val 00:07:21.325 00:01:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.325 00:01:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:21.325 00:01:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.325 00:07:21.325 real 0m2.768s 00:07:21.325 user 0m2.367s 00:07:21.325 sys 0m0.200s 00:07:21.325 ************************************ 00:07:21.325 END TEST accel_deomp_full_mthread 00:07:21.325 ************************************ 00:07:21.325 00:01:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.325 00:01:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.325 00:01:35 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:21.325 00:01:35 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.325 00:01:35 -- accel/accel.sh@129 -- # build_accel_config 00:07:21.325 00:01:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.325 00:01:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.325 00:01:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:21.325 00:01:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.325 00:01:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.325 00:01:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.325 00:01:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.325 00:01:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.325 00:01:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.325 00:01:35 -- accel/accel.sh@42 -- # jq -r . 00:07:21.325 ************************************ 00:07:21.325 START TEST accel_dif_functional_tests 00:07:21.325 ************************************ 00:07:21.325 00:01:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:21.325 [2024-11-28 00:01:35.683039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
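The accel_dif_functional_tests case that starts here does not use accel_perf at all; run_test launches the dedicated CUnit binary test/accel/dif/dif with the same harness-supplied JSON accel config on fd 62. A rough equivalent of the command (a sketch only, with the same hedge about the /dev/fd/62 config as above):

    # DIF generate/verify functional tests: guard, app tag and ref tag checks
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62

Unlike the single-core accel_perf runs (core mask 0x1), the EAL line that follows shows core mask 0x7, so three reactors are brought up for this suite.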
00:07:21.325 [2024-11-28 00:01:35.683141] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71643 ] 00:07:21.325 [2024-11-28 00:01:35.827548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.325 [2024-11-28 00:01:35.859217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.325 [2024-11-28 00:01:35.859522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.325 [2024-11-28 00:01:35.859555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.325 00:07:21.325 00:07:21.325 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.325 http://cunit.sourceforge.net/ 00:07:21.325 00:07:21.325 00:07:21.325 Suite: accel_dif 00:07:21.325 Test: verify: DIF generated, GUARD check ...passed 00:07:21.325 Test: verify: DIF generated, APPTAG check ...passed 00:07:21.325 Test: verify: DIF generated, REFTAG check ...passed 00:07:21.326 Test: verify: DIF not generated, GUARD check ...passed 00:07:21.326 Test: verify: DIF not generated, APPTAG check ...[2024-11-28 00:01:35.909747] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.326 [2024-11-28 00:01:35.909803] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.326 passed 00:07:21.326 Test: verify: DIF not generated, REFTAG check ...[2024-11-28 00:01:35.909857] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.326 [2024-11-28 00:01:35.909918] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.326 [2024-11-28 00:01:35.909966] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.326 passed 00:07:21.326 Test: verify: APPTAG correct, APPTAG check ...[2024-11-28 00:01:35.910008] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.326 passed 00:07:21.326 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-28 00:01:35.910144] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:21.326 passed 00:07:21.326 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:21.326 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:21.326 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:21.326 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:21.326 Test: generate copy: DIF generated, GUARD check ...[2024-11-28 00:01:35.910375] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:21.326 passed 00:07:21.326 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:21.326 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:21.326 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:21.326 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:21.326 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:21.326 Test: generate copy: iovecs-len validate ...passed 00:07:21.326 Test: generate copy: buffer alignment validate ...[2024-11-28 00:01:35.910750] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:21.326 passed 00:07:21.326 00:07:21.326 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.326 suites 1 1 n/a 0 0 00:07:21.326 tests 20 20 20 0 0 00:07:21.326 asserts 204 204 204 0 n/a 00:07:21.326 00:07:21.326 Elapsed time = 0.003 seconds 00:07:21.584 00:07:21.584 real 0m0.417s 00:07:21.584 user 0m0.438s 00:07:21.584 sys 0m0.130s 00:07:21.584 00:01:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.584 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:07:21.584 ************************************ 00:07:21.584 END TEST accel_dif_functional_tests 00:07:21.584 ************************************ 00:07:21.584 00:07:21.584 real 0m58.145s 00:07:21.584 user 1m1.379s 00:07:21.584 sys 0m5.739s 00:07:21.584 00:01:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.584 ************************************ 00:07:21.584 END TEST accel 00:07:21.584 ************************************ 00:07:21.584 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:07:21.584 00:01:36 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.584 00:01:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.584 00:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.584 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:07:21.584 ************************************ 00:07:21.584 START TEST accel_rpc 00:07:21.584 ************************************ 00:07:21.584 00:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.584 * Looking for test storage... 00:07:21.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:21.584 00:01:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:21.584 00:01:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:21.584 00:01:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:21.843 00:01:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:21.843 00:01:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:21.843 00:01:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:21.843 00:01:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:21.843 00:01:36 -- scripts/common.sh@335 -- # IFS=.-: 00:07:21.843 00:01:36 -- scripts/common.sh@335 -- # read -ra ver1 00:07:21.843 00:01:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.843 00:01:36 -- scripts/common.sh@336 -- # read -ra ver2 00:07:21.843 00:01:36 -- scripts/common.sh@337 -- # local 'op=<' 00:07:21.843 00:01:36 -- scripts/common.sh@339 -- # ver1_l=2 00:07:21.843 00:01:36 -- scripts/common.sh@340 -- # ver2_l=1 00:07:21.843 00:01:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:21.843 00:01:36 -- scripts/common.sh@343 -- # case "$op" in 00:07:21.843 00:01:36 -- scripts/common.sh@344 -- # : 1 00:07:21.843 00:01:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:21.843 00:01:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.843 00:01:36 -- scripts/common.sh@364 -- # decimal 1 00:07:21.843 00:01:36 -- scripts/common.sh@352 -- # local d=1 00:07:21.843 00:01:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.843 00:01:36 -- scripts/common.sh@354 -- # echo 1 00:07:21.843 00:01:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:21.843 00:01:36 -- scripts/common.sh@365 -- # decimal 2 00:07:21.843 00:01:36 -- scripts/common.sh@352 -- # local d=2 00:07:21.843 00:01:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.843 00:01:36 -- scripts/common.sh@354 -- # echo 2 00:07:21.843 00:01:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:21.843 00:01:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:21.843 00:01:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:21.843 00:01:36 -- scripts/common.sh@367 -- # return 0 00:07:21.843 00:01:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.843 00:01:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 00:01:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 00:01:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 00:01:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 00:01:36 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.843 00:01:36 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71715 00:07:21.843 00:01:36 -- accel/accel_rpc.sh@15 -- # waitforlisten 71715 00:07:21.843 00:01:36 -- common/autotest_common.sh@829 -- # '[' -z 71715 ']' 00:07:21.843 00:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.843 00:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.843 00:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
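For accel_rpc the target is started with --wait-for-rpc, which holds subsystem initialization back so that opcode-to-module assignments can be changed over JSON-RPC before the accel framework comes up. The accel_assign_opcode sequence traced in the next lines corresponds roughly to these rpc.py calls (a sketch; the test drives them through its rpc_cmd wrapper, and the "incorrect" module name is intentionally bogus):

    scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted at this stage even though no such module exists
    scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
    scripts/rpc.py framework_start_init                    # finish initialization
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected to print "software"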
00:07:21.843 00:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.843 00:01:36 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:21.843 00:01:36 -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 [2024-11-28 00:01:36.288797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.843 [2024-11-28 00:01:36.289342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:07:21.843 [2024-11-28 00:01:36.435782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.102 [2024-11-28 00:01:36.467049] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.102 [2024-11-28 00:01:36.467238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.669 00:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.669 00:01:37 -- common/autotest_common.sh@862 -- # return 0 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:22.669 00:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.669 00:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.669 ************************************ 00:07:22.669 START TEST accel_assign_opcode 00:07:22.669 ************************************ 00:07:22.669 00:01:37 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:22.669 00:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.669 [2024-11-28 00:01:37.075852] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:22.669 00:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:22.669 00:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.669 [2024-11-28 00:01:37.083844] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:22.669 00:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:22.669 00:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.669 00:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.669 00:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.669 00:01:37 -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 
00:07:22.669 00:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.669 software 00:07:22.669 ************************************ 00:07:22.669 END TEST accel_assign_opcode 00:07:22.669 ************************************ 00:07:22.669 00:07:22.669 real 0m0.186s 00:07:22.669 user 0m0.036s 00:07:22.669 sys 0m0.008s 00:07:22.669 00:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.669 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.927 00:01:37 -- accel/accel_rpc.sh@55 -- # killprocess 71715 00:07:22.927 00:01:37 -- common/autotest_common.sh@936 -- # '[' -z 71715 ']' 00:07:22.927 00:01:37 -- common/autotest_common.sh@940 -- # kill -0 71715 00:07:22.927 00:01:37 -- common/autotest_common.sh@941 -- # uname 00:07:22.927 00:01:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.927 00:01:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71715 00:07:22.927 killing process with pid 71715 00:07:22.927 00:01:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.927 00:01:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.927 00:01:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71715' 00:07:22.927 00:01:37 -- common/autotest_common.sh@955 -- # kill 71715 00:07:22.927 00:01:37 -- common/autotest_common.sh@960 -- # wait 71715 00:07:23.185 00:07:23.185 real 0m1.466s 00:07:23.185 user 0m1.472s 00:07:23.185 sys 0m0.314s 00:07:23.185 00:01:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.185 ************************************ 00:07:23.185 END TEST accel_rpc 00:07:23.185 ************************************ 00:07:23.185 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.185 00:01:37 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.185 00:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.185 00:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.185 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.185 ************************************ 00:07:23.185 START TEST app_cmdline 00:07:23.185 ************************************ 00:07:23.185 00:01:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.185 * Looking for test storage... 
00:07:23.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.185 00:01:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.185 00:01:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.185 00:01:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.185 00:01:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.186 00:01:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.186 00:01:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.186 00:01:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.186 00:01:37 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.186 00:01:37 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.186 00:01:37 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.186 00:01:37 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.186 00:01:37 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.186 00:01:37 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.186 00:01:37 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.186 00:01:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.186 00:01:37 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.186 00:01:37 -- scripts/common.sh@344 -- # : 1 00:07:23.186 00:01:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.186 00:01:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.186 00:01:37 -- scripts/common.sh@364 -- # decimal 1 00:07:23.186 00:01:37 -- scripts/common.sh@352 -- # local d=1 00:07:23.186 00:01:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.186 00:01:37 -- scripts/common.sh@354 -- # echo 1 00:07:23.186 00:01:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.186 00:01:37 -- scripts/common.sh@365 -- # decimal 2 00:07:23.186 00:01:37 -- scripts/common.sh@352 -- # local d=2 00:07:23.186 00:01:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.186 00:01:37 -- scripts/common.sh@354 -- # echo 2 00:07:23.186 00:01:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.186 00:01:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.186 00:01:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.186 00:01:37 -- scripts/common.sh@367 -- # return 0 00:07:23.186 00:01:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.186 00:01:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.186 --rc genhtml_branch_coverage=1 00:07:23.186 --rc genhtml_function_coverage=1 00:07:23.186 --rc genhtml_legend=1 00:07:23.186 --rc geninfo_all_blocks=1 00:07:23.186 --rc geninfo_unexecuted_blocks=1 00:07:23.186 00:07:23.186 ' 00:07:23.186 00:01:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.186 --rc genhtml_branch_coverage=1 00:07:23.186 --rc genhtml_function_coverage=1 00:07:23.186 --rc genhtml_legend=1 00:07:23.186 --rc geninfo_all_blocks=1 00:07:23.186 --rc geninfo_unexecuted_blocks=1 00:07:23.186 00:07:23.186 ' 00:07:23.186 00:01:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.186 --rc genhtml_branch_coverage=1 00:07:23.186 --rc genhtml_function_coverage=1 00:07:23.186 --rc genhtml_legend=1 00:07:23.186 --rc geninfo_all_blocks=1 00:07:23.186 --rc geninfo_unexecuted_blocks=1 00:07:23.186 00:07:23.186 ' 00:07:23.186 00:01:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.186 --rc genhtml_branch_coverage=1 00:07:23.186 --rc genhtml_function_coverage=1 00:07:23.186 --rc genhtml_legend=1 00:07:23.186 --rc geninfo_all_blocks=1 00:07:23.186 --rc geninfo_unexecuted_blocks=1 00:07:23.186 00:07:23.186 ' 00:07:23.186 00:01:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.186 00:01:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71811 00:07:23.186 00:01:37 -- app/cmdline.sh@18 -- # waitforlisten 71811 00:07:23.186 00:01:37 -- common/autotest_common.sh@829 -- # '[' -z 71811 ']' 00:07:23.186 00:01:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.186 00:01:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.186 00:01:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.186 00:01:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.186 00:01:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.186 00:01:37 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.444 [2024-11-28 00:01:37.826515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.444 [2024-11-28 00:01:37.826645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71811 ] 00:07:23.444 [2024-11-28 00:01:37.975791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.444 [2024-11-28 00:01:38.010255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.444 [2024-11-28 00:01:38.010447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.378 00:01:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.378 00:01:38 -- common/autotest_common.sh@862 -- # return 0 00:07:24.378 00:01:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.378 { 00:07:24.378 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:24.378 "fields": { 00:07:24.378 "major": 24, 00:07:24.378 "minor": 1, 00:07:24.378 "patch": 1, 00:07:24.378 "suffix": "-pre", 00:07:24.378 "commit": "c13c99a5e" 00:07:24.378 } 00:07:24.378 } 00:07:24.378 00:01:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.378 00:01:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.378 00:01:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.378 00:01:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.378 00:01:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.378 00:01:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.379 00:01:38 -- common/autotest_common.sh@10 -- # set +x 00:07:24.379 00:01:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.379 00:01:38 -- app/cmdline.sh@26 -- # sort 00:07:24.379 00:01:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.379 00:01:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.379 00:01:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.379 00:01:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.379 00:01:38 -- common/autotest_common.sh@650 -- # local es=0 00:07:24.379 00:01:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.379 00:01:38 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.379 00:01:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.379 00:01:38 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.379 00:01:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.379 00:01:38 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.379 00:01:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.379 00:01:38 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.379 00:01:38 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.379 00:01:38 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.637 request: 00:07:24.637 { 00:07:24.637 "method": "env_dpdk_get_mem_stats", 00:07:24.637 "req_id": 1 00:07:24.637 } 00:07:24.637 Got JSON-RPC error response 00:07:24.637 response: 00:07:24.637 { 00:07:24.637 "code": -32601, 00:07:24.637 "message": "Method not found" 00:07:24.637 } 00:07:24.637 00:01:38 -- common/autotest_common.sh@653 -- # es=1 00:07:24.637 00:01:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.637 00:01:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.637 00:01:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.637 00:01:38 -- app/cmdline.sh@1 -- # killprocess 71811 00:07:24.637 00:01:38 -- common/autotest_common.sh@936 -- # '[' -z 71811 ']' 00:07:24.637 00:01:38 -- common/autotest_common.sh@940 -- # kill -0 71811 00:07:24.637 00:01:38 -- common/autotest_common.sh@941 -- # uname 00:07:24.637 00:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.637 00:01:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71811 00:07:24.637 00:01:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.637 killing process with pid 71811 00:07:24.637 00:01:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.637 00:01:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71811' 00:07:24.637 00:01:39 -- common/autotest_common.sh@955 -- # kill 71811 00:07:24.637 00:01:39 -- common/autotest_common.sh@960 -- # wait 71811 00:07:24.895 00:07:24.895 real 0m1.640s 00:07:24.895 user 0m1.898s 00:07:24.895 sys 0m0.373s 00:07:24.895 00:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.895 ************************************ 00:07:24.896 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:07:24.896 END TEST app_cmdline 00:07:24.896 ************************************ 00:07:24.896 00:01:39 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:24.896 00:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.896 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:07:24.896 
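The app_cmdline test that just completed starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods and then checks both sides of that whitelist. Expressed as plain rpc.py calls (a sketch; the version string and pid are specific to this run):

    scripts/rpc.py spdk_get_version         # allowed: returns the JSON shown above (SPDK v24.01.1-pre, commit c13c99a5e)
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two whitelisted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # not whitelisted: rejected with JSON-RPC error -32601 "Method not found"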
************************************ 00:07:24.896 START TEST version 00:07:24.896 ************************************ 00:07:24.896 00:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:24.896 * Looking for test storage... 00:07:24.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:24.896 00:01:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:24.896 00:01:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:24.896 00:01:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:24.896 00:01:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:24.896 00:01:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:24.896 00:01:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:24.896 00:01:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:24.896 00:01:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:24.896 00:01:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.896 00:01:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:24.896 00:01:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:24.896 00:01:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:24.896 00:01:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:24.896 00:01:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:24.896 00:01:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:24.896 00:01:39 -- scripts/common.sh@344 -- # : 1 00:07:24.896 00:01:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:24.896 00:01:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.896 00:01:39 -- scripts/common.sh@364 -- # decimal 1 00:07:24.896 00:01:39 -- scripts/common.sh@352 -- # local d=1 00:07:24.896 00:01:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.896 00:01:39 -- scripts/common.sh@354 -- # echo 1 00:07:24.896 00:01:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:24.896 00:01:39 -- scripts/common.sh@365 -- # decimal 2 00:07:24.896 00:01:39 -- scripts/common.sh@352 -- # local d=2 00:07:24.896 00:01:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.896 00:01:39 -- scripts/common.sh@354 -- # echo 2 00:07:24.896 00:01:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:24.896 00:01:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:24.896 00:01:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:24.896 00:01:39 -- scripts/common.sh@367 -- # return 0 00:07:24.896 00:01:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:24.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.896 --rc genhtml_branch_coverage=1 00:07:24.896 --rc genhtml_function_coverage=1 00:07:24.896 --rc genhtml_legend=1 00:07:24.896 --rc geninfo_all_blocks=1 00:07:24.896 --rc geninfo_unexecuted_blocks=1 00:07:24.896 00:07:24.896 ' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:24.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.896 --rc genhtml_branch_coverage=1 00:07:24.896 --rc genhtml_function_coverage=1 00:07:24.896 --rc genhtml_legend=1 00:07:24.896 --rc geninfo_all_blocks=1 00:07:24.896 --rc geninfo_unexecuted_blocks=1 00:07:24.896 00:07:24.896 ' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:24.896 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:24.896 --rc genhtml_branch_coverage=1 00:07:24.896 --rc genhtml_function_coverage=1 00:07:24.896 --rc genhtml_legend=1 00:07:24.896 --rc geninfo_all_blocks=1 00:07:24.896 --rc geninfo_unexecuted_blocks=1 00:07:24.896 00:07:24.896 ' 00:07:24.896 00:01:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:24.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.896 --rc genhtml_branch_coverage=1 00:07:24.896 --rc genhtml_function_coverage=1 00:07:24.896 --rc genhtml_legend=1 00:07:24.896 --rc geninfo_all_blocks=1 00:07:24.896 --rc geninfo_unexecuted_blocks=1 00:07:24.896 00:07:24.896 ' 00:07:24.896 00:01:39 -- app/version.sh@17 -- # get_header_version major 00:07:24.896 00:01:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:24.896 00:01:39 -- app/version.sh@14 -- # cut -f2 00:07:24.896 00:01:39 -- app/version.sh@14 -- # tr -d '"' 00:07:24.896 00:01:39 -- app/version.sh@17 -- # major=24 00:07:24.896 00:01:39 -- app/version.sh@18 -- # get_header_version minor 00:07:24.896 00:01:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:24.896 00:01:39 -- app/version.sh@14 -- # tr -d '"' 00:07:24.896 00:01:39 -- app/version.sh@14 -- # cut -f2 00:07:24.896 00:01:39 -- app/version.sh@18 -- # minor=1 00:07:24.896 00:01:39 -- app/version.sh@19 -- # get_header_version patch 00:07:24.896 00:01:39 -- app/version.sh@14 -- # cut -f2 00:07:24.896 00:01:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:24.896 00:01:39 -- app/version.sh@14 -- # tr -d '"' 00:07:24.896 00:01:39 -- app/version.sh@19 -- # patch=1 00:07:24.896 00:01:39 -- app/version.sh@20 -- # get_header_version suffix 00:07:24.896 00:01:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:24.896 00:01:39 -- app/version.sh@14 -- # cut -f2 00:07:24.896 00:01:39 -- app/version.sh@14 -- # tr -d '"' 00:07:24.896 00:01:39 -- app/version.sh@20 -- # suffix=-pre 00:07:24.896 00:01:39 -- app/version.sh@22 -- # version=24.1 00:07:24.896 00:01:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:24.896 00:01:39 -- app/version.sh@25 -- # version=24.1.1 00:07:24.896 00:01:39 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:24.896 00:01:39 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:24.896 00:01:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.155 00:01:39 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:25.155 00:01:39 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:25.155 00:07:25.155 real 0m0.191s 00:07:25.155 user 0m0.120s 00:07:25.155 sys 0m0.097s 00:07:25.155 00:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.155 ************************************ 00:07:25.155 END TEST version 00:07:25.155 ************************************ 00:07:25.155 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.155 00:01:39 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:25.155 00:01:39 -- spdk/autotest.sh@191 -- # uname -s 00:07:25.155 00:01:39 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
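The version test above derives the release string twice and requires the results to match: once by grepping the SPDK_VERSION_* defines out of include/spdk/version.h, and once by importing the Python package. Condensed from the trace for readability:

    # header side: repeated for MAJOR, MINOR, PATCH and SUFFIX
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # python side: must produce the same string, here 24.1.1rc0
    python3 -c 'import spdk; print(spdk.__version__)'

As the trace shows, the -pre suffix from the header is mapped to an rc0 suffix before the comparison.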
00:07:25.155 00:01:39 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.155 00:01:39 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.155 00:01:39 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:25.155 00:01:39 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:25.155 00:01:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.155 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.155 ************************************ 00:07:25.155 START TEST blockdev_nvme 00:07:25.155 ************************************ 00:07:25.155 00:01:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:25.155 * Looking for test storage... 00:07:25.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:25.155 00:01:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.155 00:01:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.155 00:01:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.155 00:01:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.155 00:01:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.155 00:01:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.155 00:01:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.155 00:01:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.155 00:01:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.155 00:01:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.155 00:01:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.155 00:01:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.155 00:01:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.155 00:01:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.155 00:01:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.155 00:01:39 -- scripts/common.sh@344 -- # : 1 00:07:25.155 00:01:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.155 00:01:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.155 00:01:39 -- scripts/common.sh@364 -- # decimal 1 00:07:25.155 00:01:39 -- scripts/common.sh@352 -- # local d=1 00:07:25.155 00:01:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.155 00:01:39 -- scripts/common.sh@354 -- # echo 1 00:07:25.155 00:01:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.155 00:01:39 -- scripts/common.sh@365 -- # decimal 2 00:07:25.155 00:01:39 -- scripts/common.sh@352 -- # local d=2 00:07:25.155 00:01:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.155 00:01:39 -- scripts/common.sh@354 -- # echo 2 00:07:25.155 00:01:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.155 00:01:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.155 00:01:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.155 00:01:39 -- scripts/common.sh@367 -- # return 0 00:07:25.155 00:01:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.155 --rc genhtml_branch_coverage=1 00:07:25.155 --rc genhtml_function_coverage=1 00:07:25.155 --rc genhtml_legend=1 00:07:25.155 --rc geninfo_all_blocks=1 00:07:25.155 --rc geninfo_unexecuted_blocks=1 00:07:25.155 00:07:25.155 ' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.155 --rc genhtml_branch_coverage=1 00:07:25.155 --rc genhtml_function_coverage=1 00:07:25.155 --rc genhtml_legend=1 00:07:25.155 --rc geninfo_all_blocks=1 00:07:25.155 --rc geninfo_unexecuted_blocks=1 00:07:25.155 00:07:25.155 ' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.155 --rc genhtml_branch_coverage=1 00:07:25.155 --rc genhtml_function_coverage=1 00:07:25.155 --rc genhtml_legend=1 00:07:25.155 --rc geninfo_all_blocks=1 00:07:25.155 --rc geninfo_unexecuted_blocks=1 00:07:25.155 00:07:25.155 ' 00:07:25.155 00:01:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.155 --rc genhtml_branch_coverage=1 00:07:25.155 --rc genhtml_function_coverage=1 00:07:25.155 --rc genhtml_legend=1 00:07:25.155 --rc geninfo_all_blocks=1 00:07:25.155 --rc geninfo_unexecuted_blocks=1 00:07:25.155 00:07:25.155 ' 00:07:25.155 00:01:39 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:25.155 00:01:39 -- bdev/nbd_common.sh@6 -- # set -e 00:07:25.155 00:01:39 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:25.155 00:01:39 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:25.155 00:01:39 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:25.155 00:01:39 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:25.155 00:01:39 -- bdev/blockdev.sh@18 -- # : 00:07:25.155 00:01:39 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:25.155 00:01:39 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:25.155 00:01:39 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:25.155 00:01:39 -- bdev/blockdev.sh@672 -- # uname -s 00:07:25.155 00:01:39 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:25.155 00:01:39 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:25.155 00:01:39 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:25.155 00:01:39 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:25.155 00:01:39 -- bdev/blockdev.sh@682 -- # dek= 00:07:25.156 00:01:39 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:25.156 00:01:39 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:25.156 00:01:39 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:25.156 00:01:39 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:25.156 00:01:39 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:25.156 00:01:39 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:25.156 00:01:39 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=71970 00:07:25.156 00:01:39 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:25.156 00:01:39 -- bdev/blockdev.sh@47 -- # waitforlisten 71970 00:07:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.156 00:01:39 -- common/autotest_common.sh@829 -- # '[' -z 71970 ']' 00:07:25.156 00:01:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.156 00:01:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.156 00:01:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.156 00:01:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.156 00:01:39 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:25.156 00:01:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.414 [2024-11-28 00:01:39.798039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.414 [2024-11-28 00:01:39.798151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71970 ] 00:07:25.414 [2024-11-28 00:01:39.948064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.414 [2024-11-28 00:01:39.979981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.414 [2024-11-28 00:01:39.980172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.347 00:01:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.347 00:01:40 -- common/autotest_common.sh@862 -- # return 0 00:07:26.347 00:01:40 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:26.347 00:01:40 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:26.347 00:01:40 -- bdev/blockdev.sh@79 -- # local json 00:07:26.347 00:01:40 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:26.347 00:01:40 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:26.347 00:01:40 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:26.347 00:01:40 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.347 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.347 00:01:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.347 00:01:40 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:26.347 00:01:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.347 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.347 00:01:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.347 00:01:40 -- bdev/blockdev.sh@738 -- # cat 00:07:26.606 00:01:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:26.606 00:01:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.606 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.606 00:01:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.606 00:01:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:26.606 00:01:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.606 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.606 00:01:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.606 00:01:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:26.606 00:01:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.606 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.606 00:01:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.606 00:01:40 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:26.606 00:01:40 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:26.606 00:01:40 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:26.606 00:01:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.606 00:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.606 00:01:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.606 00:01:41 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:26.606 00:01:41 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:26.607 00:01:41 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d3503b53-c9f5-489f-b986-122f53f6d24f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d3503b53-c9f5-489f-b986-122f53f6d24f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d1cd06bb-b760-4339-9a56-af42f143841a"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d1cd06bb-b760-4339-9a56-af42f143841a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4a20b8ed-fb3e-470b-b1e6-31fb1675f5f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a20b8ed-fb3e-470b-b1e6-31fb1675f5f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6910909c-75a7-4fa0-b72e-fff477e91295"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6910909c-75a7-4fa0-b72e-fff477e91295",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4c5c43bb-1135-471a-83b3-408db6a557d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c5c43bb-1135-471a-83b3-408db6a557d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2c013a52-7076-43e5-97da-1574d1f0f2d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c013a52-7076-43e5-97da-1574d1f0f2d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:26.607 00:01:41 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:26.607 00:01:41 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:26.607 00:01:41 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:26.607 00:01:41 -- bdev/blockdev.sh@752 -- # killprocess 71970 00:07:26.607 00:01:41 -- common/autotest_common.sh@936 -- # '[' -z 71970 ']' 00:07:26.607 00:01:41 -- common/autotest_common.sh@940 -- # kill -0 71970 00:07:26.607 00:01:41 -- common/autotest_common.sh@941 -- # uname 00:07:26.607 00:01:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.607 00:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71970 
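The dump above is the raw bdev_get_bdevs output that blockdev.sh filters down to unclaimed bdevs before picking Nvme0n1 as hello_world_bdev. A minimal sketch of the same query run by hand, assuming a spdk_tgt is still listening on the default /var/tmp/spdk.sock and jq is available on the host:

# list unclaimed bdevs and print their names (mirrors bdev/blockdev.sh@746-747)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.claimed == false) | .name'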
00:07:26.607 00:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.607 killing process with pid 71970 00:07:26.607 00:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.607 00:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71970' 00:07:26.607 00:01:41 -- common/autotest_common.sh@955 -- # kill 71970 00:07:26.607 00:01:41 -- common/autotest_common.sh@960 -- # wait 71970 00:07:26.865 00:01:41 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:26.865 00:01:41 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:26.865 00:01:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:26.865 00:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.865 00:01:41 -- common/autotest_common.sh@10 -- # set +x 00:07:26.865 ************************************ 00:07:26.865 START TEST bdev_hello_world 00:07:26.865 ************************************ 00:07:26.865 00:01:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:26.865 [2024-11-28 00:01:41.411105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.865 [2024-11-28 00:01:41.411213] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72037 ] 00:07:27.124 [2024-11-28 00:01:41.558094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.124 [2024-11-28 00:01:41.588599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.382 [2024-11-28 00:01:41.929196] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:27.382 [2024-11-28 00:01:41.929250] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:27.382 [2024-11-28 00:01:41.929268] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:27.382 [2024-11-28 00:01:41.931218] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:27.382 [2024-11-28 00:01:41.931984] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:27.382 [2024-11-28 00:01:41.932017] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:27.382 [2024-11-28 00:01:41.932283] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
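The bdev_hello_world test above just runs the packaged hello_bdev example against the first NVMe bdev; a standalone invocation, assuming the same bdev.json generated earlier by gen_nvme.sh, is a sketch like:

# open Nvme0n1, write "Hello World!" to it, read it back, then stop the app
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -b Nvme0n1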
00:07:27.382 00:07:27.382 [2024-11-28 00:01:41.932300] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:27.641 00:07:27.641 real 0m0.729s 00:07:27.641 user 0m0.477s 00:07:27.641 sys 0m0.148s 00:07:27.641 ************************************ 00:07:27.641 END TEST bdev_hello_world 00:07:27.641 ************************************ 00:07:27.641 00:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.641 00:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.641 00:01:42 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:27.641 00:01:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:27.641 00:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.641 00:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.641 ************************************ 00:07:27.641 START TEST bdev_bounds 00:07:27.641 ************************************ 00:07:27.641 00:01:42 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:27.641 00:01:42 -- bdev/blockdev.sh@288 -- # bdevio_pid=72063 00:07:27.641 00:01:42 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:27.641 Process bdevio pid: 72063 00:07:27.641 00:01:42 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 72063' 00:07:27.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.641 00:01:42 -- bdev/blockdev.sh@291 -- # waitforlisten 72063 00:07:27.641 00:01:42 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:27.641 00:01:42 -- common/autotest_common.sh@829 -- # '[' -z 72063 ']' 00:07:27.641 00:01:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.641 00:01:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.641 00:01:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.641 00:01:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.641 00:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.641 [2024-11-28 00:01:42.209853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.641 [2024-11-28 00:01:42.209964] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72063 ] 00:07:27.900 [2024-11-28 00:01:42.356904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.900 [2024-11-28 00:01:42.389701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.900 [2024-11-28 00:01:42.389933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.900 [2024-11-28 00:01:42.390032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.465 00:01:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.465 00:01:43 -- common/autotest_common.sh@862 -- # return 0 00:07:28.465 00:01:43 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:28.761 I/O targets: 00:07:28.761 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:28.761 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:28.761 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:28.761 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:28.761 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:28.761 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:28.761 00:07:28.761 00:07:28.761 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.761 http://cunit.sourceforge.net/ 00:07:28.761 00:07:28.761 00:07:28.761 Suite: bdevio tests on: Nvme3n1 00:07:28.761 Test: blockdev write read block ...passed 00:07:28.761 Test: blockdev write zeroes read block ...passed 00:07:28.761 Test: blockdev write zeroes read no split ...passed 00:07:28.761 Test: blockdev write zeroes read split ...passed 00:07:28.761 Test: blockdev write zeroes read split partial ...passed 00:07:28.761 Test: blockdev reset ...[2024-11-28 00:01:43.136517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:28.761 [2024-11-28 00:01:43.139417] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:28.761 passed 00:07:28.761 Test: blockdev write read 8 blocks ...passed 00:07:28.761 Test: blockdev write read size > 128k ...passed 00:07:28.761 Test: blockdev write read invalid size ...passed 00:07:28.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:28.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:28.761 Test: blockdev write read max offset ...passed 00:07:28.761 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:28.761 Test: blockdev writev readv 8 blocks ...passed 00:07:28.761 Test: blockdev writev readv 30 x 1block ...passed 00:07:28.761 Test: blockdev writev readv block ...passed 00:07:28.761 Test: blockdev writev readv size > 128k ...passed 00:07:28.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:28.761 Test: blockdev comparev and writev ...[2024-11-28 00:01:43.154185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd00e000 len:0x1000 00:07:28.761 [2024-11-28 00:01:43.154233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme passthru rw ...passed 00:07:28.761 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.155722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:28.761 [2024-11-28 00:01:43.155753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme admin passthru ...passed 00:07:28.761 Test: blockdev copy ...passed 00:07:28.761 Suite: bdevio tests on: Nvme2n3 00:07:28.761 Test: blockdev write read block ...passed 00:07:28.761 Test: blockdev write zeroes read block ...passed 00:07:28.761 Test: blockdev write zeroes read no split ...passed 00:07:28.761 Test: blockdev write zeroes read split ...passed 00:07:28.761 Test: blockdev write zeroes read split partial ...passed 00:07:28.761 Test: blockdev reset ...[2024-11-28 00:01:43.175056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:28.761 [2024-11-28 00:01:43.177044] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:28.761 passed 00:07:28.761 Test: blockdev write read 8 blocks ...passed 00:07:28.761 Test: blockdev write read size > 128k ...passed 00:07:28.761 Test: blockdev write read invalid size ...passed 00:07:28.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:28.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:28.761 Test: blockdev write read max offset ...passed 00:07:28.761 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:28.761 Test: blockdev writev readv 8 blocks ...passed 00:07:28.761 Test: blockdev writev readv 30 x 1block ...passed 00:07:28.761 Test: blockdev writev readv block ...passed 00:07:28.761 Test: blockdev writev readv size > 128k ...passed 00:07:28.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:28.761 Test: blockdev comparev and writev ...[2024-11-28 00:01:43.190831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd008000 len:0x1000 00:07:28.761 [2024-11-28 00:01:43.190869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme passthru rw ...passed 00:07:28.761 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.192775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:28.761 [2024-11-28 00:01:43.192804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme admin passthru ...passed 00:07:28.761 Test: blockdev copy ...passed 00:07:28.761 Suite: bdevio tests on: Nvme2n2 00:07:28.761 Test: blockdev write read block ...passed 00:07:28.761 Test: blockdev write zeroes read block ...passed 00:07:28.761 Test: blockdev write zeroes read no split ...passed 00:07:28.761 Test: blockdev write zeroes read split ...passed 00:07:28.761 Test: blockdev write zeroes read split partial ...passed 00:07:28.761 Test: blockdev reset ...[2024-11-28 00:01:43.214080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:28.761 [2024-11-28 00:01:43.216598] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:28.761 passed 00:07:28.761 Test: blockdev write read 8 blocks ...passed 00:07:28.761 Test: blockdev write read size > 128k ...passed 00:07:28.761 Test: blockdev write read invalid size ...passed 00:07:28.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:28.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:28.761 Test: blockdev write read max offset ...passed 00:07:28.761 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:28.761 Test: blockdev writev readv 8 blocks ...passed 00:07:28.761 Test: blockdev writev readv 30 x 1block ...passed 00:07:28.761 Test: blockdev writev readv block ...passed 00:07:28.761 Test: blockdev writev readv size > 128k ...passed 00:07:28.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:28.761 Test: blockdev comparev and writev ...[2024-11-28 00:01:43.224580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd004000 len:0x1000 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme passthru rw ...[2024-11-28 00:01:43.224625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:28.761 passed 00:07:28.761 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.225186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:28.761 passed 00:07:28.762 Test: blockdev nvme admin passthru ...[2024-11-28 00:01:43.225209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:28.762 passed 00:07:28.762 Test: blockdev copy ...passed 00:07:28.762 Suite: bdevio tests on: Nvme2n1 00:07:28.762 Test: blockdev write read block ...passed 00:07:28.762 Test: blockdev write zeroes read block ...passed 00:07:28.762 Test: blockdev write zeroes read no split ...passed 00:07:28.762 Test: blockdev write zeroes read split ...passed 00:07:28.762 Test: blockdev write zeroes read split partial ...passed 00:07:28.762 Test: blockdev reset ...[2024-11-28 00:01:43.255751] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:28.762 [2024-11-28 00:01:43.257817] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:28.762 passed 00:07:28.762 Test: blockdev write read 8 blocks ...passed 00:07:28.762 Test: blockdev write read size > 128k ...passed 00:07:28.762 Test: blockdev write read invalid size ...passed 00:07:28.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:28.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:28.762 Test: blockdev write read max offset ...passed 00:07:28.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:28.762 Test: blockdev writev readv 8 blocks ...passed 00:07:28.762 Test: blockdev writev readv 30 x 1block ...passed 00:07:28.762 Test: blockdev writev readv block ...passed 00:07:28.762 Test: blockdev writev readv size > 128k ...passed 00:07:28.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:28.762 Test: blockdev comparev and writev ...[2024-11-28 00:01:43.272733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd004000 len:0x1000 00:07:28.762 [2024-11-28 00:01:43.272770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:28.762 passed 00:07:28.762 Test: blockdev nvme passthru rw ...passed 00:07:28.762 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.274617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:28.762 [2024-11-28 00:01:43.274638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:28.762 passed 00:07:28.762 Test: blockdev nvme admin passthru ...passed 00:07:28.762 Test: blockdev copy ...passed 00:07:28.762 Suite: bdevio tests on: Nvme1n1 00:07:28.762 Test: blockdev write read block ...passed 00:07:28.762 Test: blockdev write zeroes read block ...passed 00:07:28.762 Test: blockdev write zeroes read no split ...passed 00:07:28.762 Test: blockdev write zeroes read split ...passed 00:07:28.762 Test: blockdev write zeroes read split partial ...passed 00:07:28.762 Test: blockdev reset ...[2024-11-28 00:01:43.296676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:28.762 [2024-11-28 00:01:43.299777] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:28.762 passed 00:07:28.762 Test: blockdev write read 8 blocks ...passed 00:07:28.762 Test: blockdev write read size > 128k ...passed 00:07:28.762 Test: blockdev write read invalid size ...passed 00:07:28.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:28.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:28.762 Test: blockdev write read max offset ...passed 00:07:28.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:28.762 Test: blockdev writev readv 8 blocks ...passed 00:07:28.762 Test: blockdev writev readv 30 x 1block ...passed 00:07:28.762 Test: blockdev writev readv block ...passed 00:07:28.762 Test: blockdev writev readv size > 128k ...passed 00:07:28.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:28.762 Test: blockdev comparev and writev ...[2024-11-28 00:01:43.318099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c820e000 len:0x1000 00:07:28.762 [2024-11-28 00:01:43.318142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:28.762 passed 00:07:28.762 Test: blockdev nvme passthru rw ...passed 00:07:28.762 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.320596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:28.762 [2024-11-28 00:01:43.320628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:28.762 passed 00:07:28.762 Test: blockdev nvme admin passthru ...passed 00:07:28.762 Test: blockdev copy ...passed 00:07:28.762 Suite: bdevio tests on: Nvme0n1 00:07:28.762 Test: blockdev write read block ...passed 00:07:28.762 Test: blockdev write zeroes read block ...passed 00:07:28.762 Test: blockdev write zeroes read no split ...passed 00:07:29.048 Test: blockdev write zeroes read split ...passed 00:07:29.048 Test: blockdev write zeroes read split partial ...passed 00:07:29.048 Test: blockdev reset ...[2024-11-28 00:01:43.348625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:29.048 passed 00:07:29.048 Test: blockdev write read 8 blocks ...[2024-11-28 00:01:43.351352] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:29.048 passed 00:07:29.048 Test: blockdev write read size > 128k ...passed 00:07:29.048 Test: blockdev write read invalid size ...passed 00:07:29.048 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.048 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.048 Test: blockdev write read max offset ...passed 00:07:29.048 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.048 Test: blockdev writev readv 8 blocks ...passed 00:07:29.048 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.048 Test: blockdev writev readv block ...passed 00:07:29.048 Test: blockdev writev readv size > 128k ...passed 00:07:29.048 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.048 Test: blockdev comparev and writev ...passed 00:07:29.048 Test: blockdev nvme passthru rw ...[2024-11-28 00:01:43.362393] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:29.048 separate metadata which is not supported yet. 00:07:29.048 passed 00:07:29.048 Test: blockdev nvme passthru vendor specific ...[2024-11-28 00:01:43.362895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:29.048 passed 00:07:29.048 Test: blockdev nvme admin passthru ...[2024-11-28 00:01:43.362936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:29.048 passed 00:07:29.048 Test: blockdev copy ...passed 00:07:29.048 00:07:29.048 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.048 suites 6 6 n/a 0 0 00:07:29.048 tests 138 138 138 0 0 00:07:29.048 asserts 893 893 893 0 n/a 00:07:29.048 00:07:29.048 Elapsed time = 0.556 seconds 00:07:29.048 0 00:07:29.048 00:01:43 -- bdev/blockdev.sh@293 -- # killprocess 72063 00:07:29.048 00:01:43 -- common/autotest_common.sh@936 -- # '[' -z 72063 ']' 00:07:29.048 00:01:43 -- common/autotest_common.sh@940 -- # kill -0 72063 00:07:29.048 00:01:43 -- common/autotest_common.sh@941 -- # uname 00:07:29.048 00:01:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.048 00:01:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72063 00:07:29.048 killing process with pid 72063 00:07:29.048 00:01:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.048 00:01:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.048 00:01:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72063' 00:07:29.048 00:01:43 -- common/autotest_common.sh@955 -- # kill 72063 00:07:29.048 00:01:43 -- common/autotest_common.sh@960 -- # wait 72063 00:07:29.048 00:01:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:29.048 00:07:29.048 real 0m1.388s 00:07:29.048 user 0m3.446s 00:07:29.048 sys 0m0.251s 00:07:29.048 00:01:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.048 00:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.048 ************************************ 00:07:29.048 END TEST bdev_bounds 00:07:29.048 ************************************ 00:07:29.048 00:01:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:29.048 00:01:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:29.048 00:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.048 
00:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.048 ************************************ 00:07:29.048 START TEST bdev_nbd 00:07:29.048 ************************************ 00:07:29.048 00:01:43 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:29.048 00:01:43 -- bdev/blockdev.sh@298 -- # uname -s 00:07:29.048 00:01:43 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:29.048 00:01:43 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.048 00:01:43 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.048 00:01:43 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.048 00:01:43 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:29.048 00:01:43 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:29.048 00:01:43 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:29.048 00:01:43 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:29.048 00:01:43 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:29.048 00:01:43 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:29.048 00:01:43 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:29.048 00:01:43 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:29.048 00:01:43 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.048 00:01:43 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:29.048 00:01:43 -- bdev/blockdev.sh@316 -- # nbd_pid=72117 00:07:29.048 00:01:43 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:29.048 00:01:43 -- bdev/blockdev.sh@318 -- # waitforlisten 72117 /var/tmp/spdk-nbd.sock 00:07:29.048 00:01:43 -- common/autotest_common.sh@829 -- # '[' -z 72117 ']' 00:07:29.048 00:01:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.048 00:01:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:29.048 00:01:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:29.048 00:01:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.048 00:01:43 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:29.048 00:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.307 [2024-11-28 00:01:43.672246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:29.307 [2024-11-28 00:01:43.672360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.307 [2024-11-28 00:01:43.821477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.307 [2024-11-28 00:01:43.853314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.240 00:01:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.240 00:01:44 -- common/autotest_common.sh@862 -- # return 0 00:07:30.240 00:01:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@24 -- # local i 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:30.240 00:01:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:30.240 00:01:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:30.240 00:01:44 -- common/autotest_common.sh@867 -- # local i 00:07:30.240 00:01:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:30.240 00:01:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:30.240 00:01:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:30.240 00:01:44 -- common/autotest_common.sh@871 -- # break 00:07:30.240 00:01:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:30.240 00:01:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:30.240 00:01:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.240 1+0 records in 00:07:30.240 1+0 records out 00:07:30.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738372 s, 5.5 MB/s 00:07:30.240 00:01:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.241 00:01:44 -- common/autotest_common.sh@884 -- # size=4096 00:07:30.241 00:01:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.241 00:01:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:30.241 00:01:44 -- common/autotest_common.sh@887 -- # return 0 00:07:30.241 00:01:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.241 00:01:44 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:30.241 00:01:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:30.499 00:01:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:30.499 00:01:44 -- common/autotest_common.sh@867 -- # local i 00:07:30.499 00:01:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:30.499 00:01:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:30.499 00:01:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:30.499 00:01:44 -- common/autotest_common.sh@871 -- # break 00:07:30.499 00:01:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:30.499 00:01:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:30.499 00:01:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.499 1+0 records in 00:07:30.499 1+0 records out 00:07:30.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113931 s, 3.6 MB/s 00:07:30.499 00:01:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.499 00:01:44 -- common/autotest_common.sh@884 -- # size=4096 00:07:30.499 00:01:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.499 00:01:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:30.499 00:01:44 -- common/autotest_common.sh@887 -- # return 0 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:30.499 00:01:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:30.756 00:01:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:30.756 00:01:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:30.756 00:01:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:30.756 00:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:30.756 00:01:45 -- common/autotest_common.sh@867 -- # local i 00:07:30.756 00:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:30.756 00:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:30.756 00:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:30.756 00:01:45 -- common/autotest_common.sh@871 -- # break 00:07:30.756 00:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:30.756 00:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:30.756 00:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.756 1+0 records in 00:07:30.756 1+0 records out 00:07:30.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573873 s, 7.1 MB/s 00:07:30.756 00:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.756 00:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:07:30.756 00:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.756 00:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:30.756 00:01:45 -- common/autotest_common.sh@887 -- # return 0 
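Each nbd_start_disk above is followed by the same readiness check: waitfornbd polls /proc/partitions for the new node (up to 20 tries) and then reads a single 4 KiB block from it with dd. A minimal hand-run sketch of that check for /dev/nbd2; the retry delay is an assumption, since the log does not show it:

# poll until the kernel exposes nbd2, then read one 4 KiB block (mirrors waitfornbd)
for i in $(seq 1 20); do
  grep -q -w nbd2 /proc/partitions && break
  sleep 0.1   # assumed retry delay
done
dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # 4096 means the read went through
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest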
00:07:30.756 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.756 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:30.756 00:01:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:31.014 00:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:31.014 00:01:45 -- common/autotest_common.sh@867 -- # local i 00:07:31.014 00:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:31.014 00:01:45 -- common/autotest_common.sh@871 -- # break 00:07:31.014 00:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.014 1+0 records in 00:07:31.014 1+0 records out 00:07:31.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108615 s, 3.8 MB/s 00:07:31.014 00:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.014 00:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:07:31.014 00:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.014 00:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:31.014 00:01:45 -- common/autotest_common.sh@887 -- # return 0 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:31.014 00:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:31.014 00:01:45 -- common/autotest_common.sh@867 -- # local i 00:07:31.014 00:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:31.014 00:01:45 -- common/autotest_common.sh@871 -- # break 00:07:31.014 00:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:31.014 00:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.014 1+0 records in 00:07:31.014 1+0 records out 00:07:31.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748789 s, 5.5 MB/s 00:07:31.014 00:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.014 00:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:07:31.014 00:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.014 00:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:31.014 00:01:45 -- common/autotest_common.sh@887 -- # return 0 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:31.014 00:01:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:31.272 00:01:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:31.272 00:01:45 -- common/autotest_common.sh@867 -- # local i 00:07:31.272 00:01:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:31.272 00:01:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:31.272 00:01:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:31.272 00:01:45 -- common/autotest_common.sh@871 -- # break 00:07:31.272 00:01:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:31.272 00:01:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:31.272 00:01:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.272 1+0 records in 00:07:31.272 1+0 records out 00:07:31.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156022 s, 2.6 MB/s 00:07:31.272 00:01:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.272 00:01:45 -- common/autotest_common.sh@884 -- # size=4096 00:07:31.272 00:01:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.272 00:01:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:31.272 00:01:45 -- common/autotest_common.sh@887 -- # return 0 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:31.272 00:01:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd0", 00:07:31.531 "bdev_name": "Nvme0n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd1", 00:07:31.531 "bdev_name": "Nvme1n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd2", 00:07:31.531 "bdev_name": "Nvme2n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd3", 00:07:31.531 "bdev_name": "Nvme2n2" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd4", 00:07:31.531 "bdev_name": "Nvme2n3" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd5", 00:07:31.531 "bdev_name": "Nvme3n1" 00:07:31.531 } 00:07:31.531 ]' 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd0", 00:07:31.531 "bdev_name": "Nvme0n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd1", 00:07:31.531 "bdev_name": "Nvme1n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd2", 00:07:31.531 "bdev_name": "Nvme2n1" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd3", 00:07:31.531 "bdev_name": "Nvme2n2" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": 
"/dev/nbd4", 00:07:31.531 "bdev_name": "Nvme2n3" 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "nbd_device": "/dev/nbd5", 00:07:31.531 "bdev_name": "Nvme3n1" 00:07:31.531 } 00:07:31.531 ]' 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@51 -- # local i 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.531 00:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.789 00:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.790 00:01:46 -- bdev/nbd_common.sh@41 -- # break 00:07:31.790 00:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.790 00:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.790 00:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@41 -- # break 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:32.050 00:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@41 -- # break 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:32.308 
00:01:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@41 -- # break 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.308 00:01:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.567 00:01:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@41 -- # break 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.826 00:01:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@65 -- # true 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@122 -- # count=0 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@127 -- # return 0 00:07:33.083 00:01:47 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:33.083 00:01:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@12 -- # local i 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:33.084 00:01:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:33.342 /dev/nbd0 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.342 00:01:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:33.342 00:01:47 -- common/autotest_common.sh@867 -- # local i 00:07:33.342 00:01:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:33.342 00:01:47 -- common/autotest_common.sh@871 -- # break 00:07:33.342 00:01:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.342 1+0 records in 00:07:33.342 1+0 records out 00:07:33.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742145 s, 5.5 MB/s 00:07:33.342 00:01:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.342 00:01:47 -- common/autotest_common.sh@884 -- # size=4096 00:07:33.342 00:01:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.342 00:01:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:33.342 00:01:47 -- common/autotest_common.sh@887 -- # return 0 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:33.342 /dev/nbd1 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.342 00:01:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:33.342 00:01:47 -- common/autotest_common.sh@867 -- # local i 00:07:33.342 00:01:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:33.342 00:01:47 -- common/autotest_common.sh@871 -- # break 
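The trace above is the nbd_start_disks pattern: each bdev is attached to an NBD node with an nbd_start_disk RPC against the per-test socket, and waitfornbd then polls /proc/partitions until the kernel publishes the device. A condensed sketch using the names from this run (the poll limit of 20 comes from the trace; the sleep between attempts is an assumption, since the log only records the grep calls):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
  # waitfornbd: give the kernel up to 20 polls to expose the new node
  for i in $(seq 1 20); do
      grep -q -w nbd1 /proc/partitions && break
      sleep 0.1   # assumed interval, not visible in the trace
  done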
00:07:33.342 00:01:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:33.342 00:01:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.342 1+0 records in 00:07:33.342 1+0 records out 00:07:33.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806798 s, 5.1 MB/s 00:07:33.342 00:01:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.342 00:01:47 -- common/autotest_common.sh@884 -- # size=4096 00:07:33.342 00:01:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.342 00:01:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:33.342 00:01:47 -- common/autotest_common.sh@887 -- # return 0 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:33.342 00:01:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:33.601 /dev/nbd10 00:07:33.601 00:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:33.601 00:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:33.601 00:01:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:33.601 00:01:48 -- common/autotest_common.sh@867 -- # local i 00:07:33.601 00:01:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:33.601 00:01:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:33.601 00:01:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:33.601 00:01:48 -- common/autotest_common.sh@871 -- # break 00:07:33.601 00:01:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:33.601 00:01:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:33.601 00:01:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.601 1+0 records in 00:07:33.601 1+0 records out 00:07:33.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776907 s, 5.3 MB/s 00:07:33.601 00:01:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.601 00:01:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:33.601 00:01:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.601 00:01:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:33.601 00:01:48 -- common/autotest_common.sh@887 -- # return 0 00:07:33.601 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.601 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:33.601 00:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:33.859 /dev/nbd11 00:07:33.859 00:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:33.859 00:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:33.859 00:01:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:33.859 00:01:48 -- common/autotest_common.sh@867 -- # local i 00:07:33.859 00:01:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:33.859 00:01:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:33.859 00:01:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:33.859 00:01:48 -- 
common/autotest_common.sh@871 -- # break 00:07:33.859 00:01:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:33.859 00:01:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:33.859 00:01:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.859 1+0 records in 00:07:33.859 1+0 records out 00:07:33.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969494 s, 4.2 MB/s 00:07:33.859 00:01:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.859 00:01:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:33.859 00:01:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.859 00:01:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:33.859 00:01:48 -- common/autotest_common.sh@887 -- # return 0 00:07:33.859 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.859 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:33.859 00:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:34.117 /dev/nbd12 00:07:34.117 00:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:34.118 00:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:34.118 00:01:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:34.118 00:01:48 -- common/autotest_common.sh@867 -- # local i 00:07:34.118 00:01:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.118 00:01:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.118 00:01:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:34.118 00:01:48 -- common/autotest_common.sh@871 -- # break 00:07:34.118 00:01:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.118 00:01:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.118 00:01:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.118 1+0 records in 00:07:34.118 1+0 records out 00:07:34.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834701 s, 4.9 MB/s 00:07:34.118 00:01:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.118 00:01:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:34.118 00:01:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.118 00:01:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.118 00:01:48 -- common/autotest_common.sh@887 -- # return 0 00:07:34.118 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.118 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:34.118 00:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:34.375 /dev/nbd13 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:34.375 00:01:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:34.375 00:01:48 -- common/autotest_common.sh@867 -- # local i 00:07:34.375 00:01:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.375 00:01:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.375 00:01:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
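Once a node shows up in /proc/partitions, the harness also smoke-tests it with a single 4 KiB O_DIRECT read into a scratch file and checks that the file is non-empty, which is what the repeated dd/stat/rm sequences above are. Standalone, with the paths from this run:

  dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # the test only requires a non-zero size
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest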
00:07:34.375 00:01:48 -- common/autotest_common.sh@871 -- # break 00:07:34.375 00:01:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.375 00:01:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.375 00:01:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.375 1+0 records in 00:07:34.375 1+0 records out 00:07:34.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111154 s, 3.7 MB/s 00:07:34.375 00:01:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.375 00:01:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:34.375 00:01:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.375 00:01:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.375 00:01:48 -- common/autotest_common.sh@887 -- # return 0 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.375 00:01:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd0", 00:07:34.633 "bdev_name": "Nvme0n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd1", 00:07:34.633 "bdev_name": "Nvme1n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd10", 00:07:34.633 "bdev_name": "Nvme2n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd11", 00:07:34.633 "bdev_name": "Nvme2n2" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd12", 00:07:34.633 "bdev_name": "Nvme2n3" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd13", 00:07:34.633 "bdev_name": "Nvme3n1" 00:07:34.633 } 00:07:34.633 ]' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd0", 00:07:34.633 "bdev_name": "Nvme0n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd1", 00:07:34.633 "bdev_name": "Nvme1n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd10", 00:07:34.633 "bdev_name": "Nvme2n1" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd11", 00:07:34.633 "bdev_name": "Nvme2n2" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd12", 00:07:34.633 "bdev_name": "Nvme2n3" 00:07:34.633 }, 00:07:34.633 { 00:07:34.633 "nbd_device": "/dev/nbd13", 00:07:34.633 "bdev_name": "Nvme3n1" 00:07:34.633 } 00:07:34.633 ]' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.633 /dev/nbd1 00:07:34.633 /dev/nbd10 00:07:34.633 /dev/nbd11 00:07:34.633 /dev/nbd12 00:07:34.633 /dev/nbd13' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.633 /dev/nbd1 00:07:34.633 /dev/nbd10 00:07:34.633 /dev/nbd11 00:07:34.633 /dev/nbd12 00:07:34.633 /dev/nbd13' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@65 -- # count=6 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@95 -- # count=6 
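With all six bdevs attached, nbd_get_count re-reads the device map over RPC and reduces it to a number, and the count=6 above has to match the six devices that were started. A minimal reproduction of that check (same socket, same jq/grep reduction as in the trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd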
00:07:34.633 00:01:49 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:34.633 256+0 records in 00:07:34.633 256+0 records out 00:07:34.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501252 s, 209 MB/s 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.633 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.892 256+0 records in 00:07:34.892 256+0 records out 00:07:34.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194301 s, 5.4 MB/s 00:07:34.892 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.892 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.892 256+0 records in 00:07:34.892 256+0 records out 00:07:34.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148405 s, 7.1 MB/s 00:07:34.892 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.892 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:35.151 256+0 records in 00:07:35.151 256+0 records out 00:07:35.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167463 s, 6.3 MB/s 00:07:35.151 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.151 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:35.151 256+0 records in 00:07:35.151 256+0 records out 00:07:35.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15026 s, 7.0 MB/s 00:07:35.151 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.151 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:35.408 256+0 records in 00:07:35.408 256+0 records out 00:07:35.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209265 s, 5.0 MB/s 00:07:35.408 00:01:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.408 00:01:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:35.666 256+0 records in 00:07:35.666 256+0 records out 00:07:35.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1188 s, 8.8 MB/s 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.666 00:01:50 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@51 -- # local i 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.666 00:01:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@41 -- # break 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.924 00:01:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.183 00:01:50 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@41 -- # break 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@41 -- # break 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.183 00:01:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@41 -- # break 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.443 00:01:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:36.701 00:01:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:36.701 00:01:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:36.701 00:01:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@41 -- # break 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.702 00:01:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@41 -- # break 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.960 00:01:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@65 -- # true 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@65 -- # count=0 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@104 -- # count=0 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@109 -- # return 0 00:07:37.219 00:01:51 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:37.219 malloc_lvol_verify 00:07:37.219 00:01:51 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:37.478 25e6285a-80cd-4316-9307-05e1c86cb263 00:07:37.478 00:01:51 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:37.736 843f9514-eaff-4e46-b515-b350e15400ff 00:07:37.736 00:01:52 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:37.995 /dev/nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:37.995 mke2fs 1.47.0 (5-Feb-2023) 00:07:37.995 Discarding device blocks: 0/4096 done 00:07:37.995 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:37.995 00:07:37.995 Allocating group tables: 0/1 done 00:07:37.995 Writing inode tables: 0/1 done 00:07:37.995 Creating journal (1024 blocks): done 00:07:37.995 Writing superblocks and filesystem accounting information: 0/1 done 00:07:37.995 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@51 -- # local i 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@41 -- # break 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.995 00:01:52 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:37.996 00:01:52 -- bdev/nbd_common.sh@147 -- # return 0 00:07:37.996 00:01:52 -- bdev/blockdev.sh@324 -- # killprocess 72117 00:07:37.996 00:01:52 -- common/autotest_common.sh@936 -- # '[' -z 72117 ']' 00:07:37.996 00:01:52 -- common/autotest_common.sh@940 -- # kill -0 72117 00:07:37.996 00:01:52 -- common/autotest_common.sh@941 -- # uname 00:07:38.288 00:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.288 00:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72117 00:07:38.288 killing process with pid 72117 00:07:38.288 00:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:38.288 00:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:38.288 00:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72117' 00:07:38.288 00:01:52 -- common/autotest_common.sh@955 -- # kill 72117 00:07:38.288 00:01:52 -- common/autotest_common.sh@960 -- # wait 72117 00:07:38.288 00:01:52 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:38.288 00:07:38.288 real 0m9.190s 00:07:38.288 user 0m12.851s 00:07:38.288 sys 0m3.163s 00:07:38.288 00:01:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.288 00:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:38.288 ************************************ 00:07:38.288 END TEST bdev_nbd 00:07:38.288 ************************************ 00:07:38.288 00:01:52 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:38.288 00:01:52 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:38.288 skipping fio tests on NVMe due to multi-ns failures. 00:07:38.288 00:01:52 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:38.288 00:01:52 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:38.288 00:01:52 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.288 00:01:52 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:38.288 00:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.288 00:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:38.570 ************************************ 00:07:38.570 START TEST bdev_verify 00:07:38.570 ************************************ 00:07:38.570 00:01:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.570 [2024-11-28 00:01:52.927360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
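bdev_verify leaves NBD behind and drives the same six bdevs through the bdevperf example application in verify mode. The invocation recorded above, reduced to its essentials (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m core mask; the -C flag and the empty trailing argument are kept exactly as they appear in the trace):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''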
00:07:38.570 [2024-11-28 00:01:52.927494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72488 ] 00:07:38.570 [2024-11-28 00:01:53.078134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.570 [2024-11-28 00:01:53.111590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.570 [2024-11-28 00:01:53.111685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.136 Running I/O for 5 seconds... 00:07:44.398 00:07:44.398 Latency(us) 00:07:44.398 [2024-11-28T00:01:59.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0xbd0bd 00:07:44.398 Nvme0n1 : 5.04 2857.72 11.16 0.00 0.00 44653.27 7461.02 55251.89 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:44.398 Nvme0n1 : 5.05 2850.38 11.13 0.00 0.00 44798.23 5545.35 59284.87 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0xa0000 00:07:44.398 Nvme1n1 : 5.05 2857.05 11.16 0.00 0.00 44639.24 7410.61 53235.40 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0xa0000 length 0xa0000 00:07:44.398 Nvme1n1 : 5.05 2848.62 11.13 0.00 0.00 44751.21 7158.55 58074.98 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0x80000 00:07:44.398 Nvme2n1 : 5.05 2862.51 11.18 0.00 0.00 44480.18 2255.95 51017.26 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x80000 length 0x80000 00:07:44.398 Nvme2n1 : 5.05 2847.66 11.12 0.00 0.00 44694.37 7309.78 56865.08 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0x80000 00:07:44.398 Nvme2n2 : 5.05 2861.38 11.18 0.00 0.00 44429.42 3680.10 50009.01 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x80000 length 0x80000 00:07:44.398 Nvme2n2 : 5.05 2852.22 11.14 0.00 0.00 44610.47 1556.48 57268.38 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0x80000 00:07:44.398 Nvme2n3 : 5.05 2860.55 11.17 0.00 0.00 44396.04 4411.08 50613.96 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x80000 length 0x80000 00:07:44.398 Nvme2n3 : 5.06 2850.74 11.14 0.00 0.00 44575.88 3579.27 60091.47 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x0 length 0x20000 00:07:44.398 Nvme3n1 : 
5.06 2858.88 11.17 0.00 0.00 44369.34 6604.01 49000.76 00:07:44.398 [2024-11-28T00:01:59.000Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.398 Verification LBA range: start 0x20000 length 0x20000 00:07:44.398 Nvme3n1 : 5.06 2849.18 11.13 0.00 0.00 44542.64 5696.59 60091.47 00:07:44.398 [2024-11-28T00:01:59.000Z] =================================================================================================================== 00:07:44.398 [2024-11-28T00:01:59.000Z] Total : 34256.89 133.82 0.00 0.00 44578.10 1556.48 60091.47 00:08:10.930 00:08:10.930 real 0m32.201s 00:08:10.930 user 1m3.460s 00:08:10.930 sys 0m0.327s 00:08:10.930 00:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.930 00:02:25 -- common/autotest_common.sh@10 -- # set +x 00:08:10.930 ************************************ 00:08:10.930 END TEST bdev_verify 00:08:10.930 ************************************ 00:08:10.930 00:02:25 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.930 00:02:25 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:10.930 00:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.930 00:02:25 -- common/autotest_common.sh@10 -- # set +x 00:08:10.930 ************************************ 00:08:10.930 START TEST bdev_verify_big_io 00:08:10.930 ************************************ 00:08:10.930 00:02:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.930 [2024-11-28 00:02:25.200610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.930 [2024-11-28 00:02:25.200726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72854 ] 00:08:10.930 [2024-11-28 00:02:25.346753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.930 [2024-11-28 00:02:25.378604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.930 [2024-11-28 00:02:25.378669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.186 Running I/O for 5 seconds... 
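bdev_verify_big_io, started above, is the same verify workload with the I/O size raised from 4096 to 65536 bytes; nothing else in the bdevperf command line changes, and the much lower IOPS figures in the table below simply reflect the larger requests:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''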
00:08:17.754 00:08:17.754 Latency(us) 00:08:17.754 [2024-11-28T00:02:32.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0xbd0b 00:08:17.754 Nvme0n1 : 5.33 268.26 16.77 0.00 0.00 464843.22 81869.59 787238.60 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:17.754 Nvme0n1 : 5.43 230.80 14.42 0.00 0.00 539495.22 90742.15 738842.78 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0xa000 00:08:17.754 Nvme1n1 : 5.33 268.18 16.76 0.00 0.00 456657.10 82676.18 722710.84 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0xa000 length 0xa000 00:08:17.754 Nvme1n1 : 5.43 230.72 14.42 0.00 0.00 531352.32 91145.45 683994.19 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0x8000 00:08:17.754 Nvme2n1 : 5.40 282.20 17.64 0.00 0.00 430992.09 33473.77 603334.50 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x8000 length 0x8000 00:08:17.754 Nvme2n1 : 5.43 230.56 14.41 0.00 0.00 522946.78 93968.54 629145.60 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0x8000 00:08:17.754 Nvme2n2 : 5.43 288.55 18.03 0.00 0.00 414315.88 27625.94 577523.40 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x8000 length 0x8000 00:08:17.754 Nvme2n2 : 5.46 246.86 15.43 0.00 0.00 489104.29 5520.15 567844.23 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0x8000 00:08:17.754 Nvme2n3 : 5.45 295.30 18.46 0.00 0.00 398196.86 17543.48 500090.09 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x8000 length 0x8000 00:08:17.754 Nvme2n3 : 5.47 246.76 15.42 0.00 0.00 481248.82 6351.95 503316.48 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x0 length 0x2000 00:08:17.754 Nvme3n1 : 5.49 325.68 20.36 0.00 0.00 356347.54 926.33 467826.22 00:08:17.754 [2024-11-28T00:02:32.356Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:17.754 Verification LBA range: start 0x2000 length 0x2000 00:08:17.754 Nvme3n1 : 5.47 254.25 15.89 0.00 0.00 459923.72 1512.37 474278.99 00:08:17.754 [2024-11-28T00:02:32.356Z] =================================================================================================================== 00:08:17.754 [2024-11-28T00:02:32.356Z] Total : 3168.11 198.01 0.00 0.00 456355.25 926.33 787238.60 00:08:18.015 00:08:18.015 real 0m7.262s 00:08:18.015 user 
0m13.810s 00:08:18.015 sys 0m0.213s 00:08:18.015 00:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.015 00:02:32 -- common/autotest_common.sh@10 -- # set +x 00:08:18.015 ************************************ 00:08:18.015 END TEST bdev_verify_big_io 00:08:18.015 ************************************ 00:08:18.015 00:02:32 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.015 00:02:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:18.015 00:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.015 00:02:32 -- common/autotest_common.sh@10 -- # set +x 00:08:18.015 ************************************ 00:08:18.015 START TEST bdev_write_zeroes 00:08:18.015 ************************************ 00:08:18.015 00:02:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.015 [2024-11-28 00:02:32.493323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.015 [2024-11-28 00:02:32.493448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:08:18.277 [2024-11-28 00:02:32.640905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.277 [2024-11-28 00:02:32.670985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.538 Running I/O for 1 seconds... 00:08:19.488 00:08:19.488 Latency(us) 00:08:19.488 [2024-11-28T00:02:34.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme0n1 : 1.01 12138.06 47.41 0.00 0.00 10520.41 5217.67 18350.08 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme1n1 : 1.01 12119.86 47.34 0.00 0.00 10522.15 8368.44 18350.08 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme2n1 : 1.02 12105.40 47.29 0.00 0.00 10506.38 7965.14 18753.38 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme2n2 : 1.02 12091.63 47.23 0.00 0.00 10495.33 8116.38 18350.08 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme2n3 : 1.02 12078.00 47.18 0.00 0.00 10479.84 6452.78 19559.98 00:08:19.488 [2024-11-28T00:02:34.090Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:19.488 Nvme3n1 : 1.02 12122.44 47.35 0.00 0.00 10454.84 6956.90 18047.61 00:08:19.488 [2024-11-28T00:02:34.090Z] =================================================================================================================== 00:08:19.488 [2024-11-28T00:02:34.090Z] Total : 72655.39 283.81 0.00 0.00 10496.46 5217.67 19559.98 00:08:19.747 00:08:19.747 real 0m1.798s 00:08:19.747 user 0m1.519s 00:08:19.747 sys 0m0.160s 00:08:19.747 00:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 
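bdev_write_zeroes swaps the workload for write_zeroes, shortens the run to one second and, per the EAL line above, pins bdevperf to a single core; each namespace sustains roughly 12k write-zeroes operations per second in the table above. The recorded command line, condensed:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1 ''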
00:08:19.747 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.747 ************************************ 00:08:19.747 END TEST bdev_write_zeroes 00:08:19.747 ************************************ 00:08:19.747 00:02:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.747 00:02:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:19.747 00:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.747 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:19.747 ************************************ 00:08:19.747 START TEST bdev_json_nonenclosed 00:08:19.747 ************************************ 00:08:19.747 00:02:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.747 [2024-11-28 00:02:34.325869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:19.747 [2024-11-28 00:02:34.325994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72989 ] 00:08:20.007 [2024-11-28 00:02:34.474338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.007 [2024-11-28 00:02:34.507319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.007 [2024-11-28 00:02:34.507497] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:20.007 [2024-11-28 00:02:34.507517] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.007 00:08:20.007 real 0m0.312s 00:08:20.007 user 0m0.115s 00:08:20.007 sys 0m0.094s 00:08:20.007 00:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.007 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.007 ************************************ 00:08:20.007 END TEST bdev_json_nonenclosed 00:08:20.007 ************************************ 00:08:20.265 00:02:34 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.265 00:02:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:20.265 00:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.265 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.265 ************************************ 00:08:20.265 START TEST bdev_json_nonarray 00:08:20.265 ************************************ 00:08:20.265 00:02:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.265 [2024-11-28 00:02:34.679119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
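bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is handed deliberately broken configuration files (a config not enclosed in {} and one whose 'subsystems' is not an array, per the spdk_subsystem_init_from_json_config error messages) and must shut down cleanly with a non-zero status instead of crashing. The second case, condensed from the run_test line above; the error and the spdk_app_stop warning are the expected output:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
      -q 128 -o 4096 -w write_zeroes -t 1 ''
  # expected: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array,
  # followed by spdk_app_stop'd on non-zero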
00:08:20.265 [2024-11-28 00:02:34.679239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73020 ] 00:08:20.265 [2024-11-28 00:02:34.828428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.265 [2024-11-28 00:02:34.861441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.265 [2024-11-28 00:02:34.861633] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:20.265 [2024-11-28 00:02:34.861660] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.525 00:08:20.525 real 0m0.324s 00:08:20.525 user 0m0.134s 00:08:20.525 sys 0m0.087s 00:08:20.525 ************************************ 00:08:20.525 END TEST bdev_json_nonarray 00:08:20.525 ************************************ 00:08:20.525 00:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.525 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.525 00:02:34 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:20.525 00:02:34 -- bdev/blockdev.sh@809 -- # cleanup 00:08:20.525 00:02:34 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:20.525 00:02:34 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:20.525 00:02:34 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:20.525 00:02:34 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:20.525 00:08:20.525 real 0m55.412s 00:08:20.525 user 1m37.754s 00:08:20.525 sys 0m5.072s 00:08:20.525 ************************************ 00:08:20.525 END TEST blockdev_nvme 00:08:20.525 ************************************ 00:08:20.525 00:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.525 00:02:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.525 00:02:35 -- spdk/autotest.sh@206 -- # uname -s 00:08:20.525 00:02:35 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:20.525 00:02:35 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:20.525 00:02:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.525 00:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.525 00:02:35 -- common/autotest_common.sh@10 -- # set +x 00:08:20.525 ************************************ 00:08:20.525 START TEST blockdev_nvme_gpt 00:08:20.525 ************************************ 00:08:20.525 00:02:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:20.525 * Looking for test storage... 
00:08:20.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:20.525 00:02:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.525 00:02:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.525 00:02:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.782 00:02:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.782 00:02:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.782 00:02:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.782 00:02:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.782 00:02:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.782 00:02:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.782 00:02:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.782 00:02:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.782 00:02:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.782 00:02:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.782 00:02:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.782 00:02:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.782 00:02:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.782 00:02:35 -- scripts/common.sh@344 -- # : 1 00:08:20.782 00:02:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.782 00:02:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.782 00:02:35 -- scripts/common.sh@364 -- # decimal 1 00:08:20.782 00:02:35 -- scripts/common.sh@352 -- # local d=1 00:08:20.782 00:02:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.782 00:02:35 -- scripts/common.sh@354 -- # echo 1 00:08:20.782 00:02:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.782 00:02:35 -- scripts/common.sh@365 -- # decimal 2 00:08:20.782 00:02:35 -- scripts/common.sh@352 -- # local d=2 00:08:20.782 00:02:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.783 00:02:35 -- scripts/common.sh@354 -- # echo 2 00:08:20.783 00:02:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.783 00:02:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.783 00:02:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.783 00:02:35 -- scripts/common.sh@367 -- # return 0 00:08:20.783 00:02:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.783 00:02:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.783 --rc genhtml_branch_coverage=1 00:08:20.783 --rc genhtml_function_coverage=1 00:08:20.783 --rc genhtml_legend=1 00:08:20.783 --rc geninfo_all_blocks=1 00:08:20.783 --rc geninfo_unexecuted_blocks=1 00:08:20.783 00:08:20.783 ' 00:08:20.783 00:02:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.783 --rc genhtml_branch_coverage=1 00:08:20.783 --rc genhtml_function_coverage=1 00:08:20.783 --rc genhtml_legend=1 00:08:20.783 --rc geninfo_all_blocks=1 00:08:20.783 --rc geninfo_unexecuted_blocks=1 00:08:20.783 00:08:20.783 ' 00:08:20.783 00:02:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.783 --rc genhtml_branch_coverage=1 00:08:20.783 --rc genhtml_function_coverage=1 00:08:20.783 --rc genhtml_legend=1 00:08:20.783 --rc geninfo_all_blocks=1 00:08:20.783 --rc geninfo_unexecuted_blocks=1 00:08:20.783 00:08:20.783 ' 00:08:20.783 00:02:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.783 --rc genhtml_branch_coverage=1 00:08:20.783 --rc genhtml_function_coverage=1 00:08:20.783 --rc genhtml_legend=1 00:08:20.783 --rc geninfo_all_blocks=1 00:08:20.783 --rc geninfo_unexecuted_blocks=1 00:08:20.783 00:08:20.783 ' 00:08:20.783 00:02:35 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:20.783 00:02:35 -- bdev/nbd_common.sh@6 -- # set -e 00:08:20.783 00:02:35 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:20.783 00:02:35 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:20.783 00:02:35 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:20.783 00:02:35 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:20.783 00:02:35 -- bdev/blockdev.sh@18 -- # : 00:08:20.783 00:02:35 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:20.783 00:02:35 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:20.783 00:02:35 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:20.783 00:02:35 -- bdev/blockdev.sh@672 -- # uname -s 00:08:20.783 00:02:35 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:20.783 00:02:35 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:20.783 00:02:35 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:20.783 00:02:35 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:20.783 00:02:35 -- bdev/blockdev.sh@682 -- # dek= 00:08:20.783 00:02:35 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:20.783 00:02:35 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:20.783 00:02:35 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:20.783 00:02:35 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:20.783 00:02:35 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:20.783 00:02:35 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:20.783 00:02:35 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=73092 00:08:20.783 00:02:35 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:20.783 00:02:35 -- bdev/blockdev.sh@47 -- # waitforlisten 73092 00:08:20.783 00:02:35 -- common/autotest_common.sh@829 -- # '[' -z 73092 ']' 00:08:20.783 00:02:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.783 00:02:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.783 00:02:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.783 00:02:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.783 00:02:35 -- common/autotest_common.sh@10 -- # set +x 00:08:20.783 00:02:35 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:20.783 [2024-11-28 00:02:35.237096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
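Unlike the bdevperf-driven tests above, the gpt suite starts a persistent spdk_tgt daemon and blocks in waitforlisten until its RPC socket answers before doing any device or partition work. Condensed from the trace (only the binary path, the pid 73092 and the /var/tmp/spdk.sock socket are taken from the log; the backgrounding and the poll describe how the harness behaves, not literal commands shown here):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
  # waitforlisten 73092: wait until /var/tmp/spdk.sock accepts RPC connections, then proceed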
00:08:20.783 [2024-11-28 00:02:35.237215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73092 ] 00:08:21.041 [2024-11-28 00:02:35.385179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.041 [2024-11-28 00:02:35.418119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.041 [2024-11-28 00:02:35.418292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.610 00:02:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.610 00:02:36 -- common/autotest_common.sh@862 -- # return 0 00:08:21.610 00:02:36 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:21.610 00:02:36 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:21.610 00:02:36 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:21.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:22.133 Waiting for block devices as requested 00:08:22.133 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.133 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.133 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.394 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.685 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:27.685 00:02:41 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:27.685 00:02:41 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:27.685 00:02:41 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:27.685 00:02:41 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:27.685 00:02:41 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:27.685 00:02:41 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:27.685 00:02:41 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:27.685 00:02:41 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:27.685 00:02:41 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:27.685 00:02:41 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:27.685 00:02:41 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:27.685 00:02:41 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:27.685 00:02:41 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:27.685 00:02:41 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:27.685 00:02:41 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:27.685 00:02:41 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:27.685 BYT; 00:08:27.685 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:27.685 00:02:41 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:27.685 BYT; 00:08:27.685 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:27.685 00:02:41 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:27.685 00:02:41 -- bdev/blockdev.sh@114 -- # break 00:08:27.686 00:02:41 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:27.686 00:02:41 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:27.686 00:02:41 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:27.686 00:02:41 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:27.686 00:02:41 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:27.686 00:02:41 -- scripts/common.sh@410 -- # local spdk_guid 00:08:27.686 00:02:41 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:27.686 00:02:41 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:27.686 00:02:41 -- scripts/common.sh@415 -- # IFS='()' 00:08:27.686 00:02:41 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:27.686 00:02:41 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:27.686 00:02:41 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:27.686 00:02:41 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:27.686 00:02:41 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:27.686 00:02:41 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:27.686 00:02:41 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:27.686 00:02:41 -- scripts/common.sh@422 -- # local spdk_guid 00:08:27.686 00:02:41 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:27.686 00:02:41 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:27.686 00:02:41 -- scripts/common.sh@427 -- # IFS='()' 00:08:27.686 00:02:41 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:27.686 00:02:41 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:27.686 00:02:41 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:27.686 00:02:41 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:27.686 00:02:41 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:27.686 00:02:41 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:27.686 00:02:41 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:28.630 The operation has completed successfully. 00:08:28.630 00:02:42 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:29.570 The operation has completed successfully. 
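(Editor's aside: a condensed sketch of the GPT setup just traced, assuming /dev/nvme2n1 is the unlabeled scratch namespace selected above; the partition-type and unique GUIDs are the ones the script parses out of module/bdev/gpt/gpt.h and passes to sgdisk.)
dev=/dev/nvme2n1
# label the disk and create two half-size partitions
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# tag partition 1 with the current SPDK GPT partition-type GUID and a fixed unique GUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
# tag partition 2 with the old SPDK GPT partition-type GUID and its fixed unique GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"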
00:08:29.570 00:02:43 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:30.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.395 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.395 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.395 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.395 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.395 00:02:44 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:30.395 00:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.395 00:02:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.395 [] 00:08:30.395 00:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.395 00:02:44 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:30.395 00:02:44 -- bdev/blockdev.sh@79 -- # local json 00:08:30.395 00:02:44 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:30.395 00:02:44 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:30.654 00:02:45 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:30.654 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.654 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:30.912 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@738 -- # cat 00:08:30.912 00:02:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:30.912 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:30.912 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:30.912 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:30.912 00:02:45 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:30.912 00:02:45 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:30.912 00:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.912 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:30.912 00:02:45 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.912 00:02:45 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:30.912 00:02:45 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:30.913 00:02:45 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "550fdea1-962e-45d7-9b33-79ebab78232e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "550fdea1-962e-45d7-9b33-79ebab78232e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "9c476600-aee3-470e-b336-18dce4525e99"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c476600-aee3-470e-b336-18dce4525e99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "40782a1b-32c0-4e59-80b3-79f5b5a25476"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "40782a1b-32c0-4e59-80b3-79f5b5a25476",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9e153062-2091-4235-b438-300d5e06c800"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e153062-2091-4235-b438-300d5e06c800",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "82cc37ca-ef10-4e67-98e5-b4b6517c6e3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "82cc37ca-ef10-4e67-98e5-b4b6517c6e3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:30.913 00:02:45 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:30.913 00:02:45 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:30.913 00:02:45 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:30.913 00:02:45 -- bdev/blockdev.sh@752 -- # killprocess 73092 00:08:30.913 00:02:45 -- common/autotest_common.sh@936 -- # '[' -z 73092 ']' 00:08:30.913 00:02:45 -- common/autotest_common.sh@940 -- # kill -0 73092 00:08:30.913 00:02:45 -- common/autotest_common.sh@941 -- # uname 00:08:30.913 00:02:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.913 00:02:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73092 00:08:30.913 00:02:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:30.913 00:02:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:30.913 00:02:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73092' 00:08:30.913 killing process with pid 73092 00:08:30.913 00:02:45 -- common/autotest_common.sh@955 -- # kill 73092 00:08:30.913 00:02:45 -- common/autotest_common.sh@960 -- # wait 73092 00:08:31.173 00:02:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:31.174 00:02:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:31.174 00:02:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:31.174 00:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.174 00:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 ************************************ 00:08:31.174 START TEST bdev_hello_world 00:08:31.174 ************************************ 00:08:31.174 00:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:31.174 [2024-11-28 00:02:45.738960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.174 [2024-11-28 00:02:45.739059] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73730 ] 00:08:31.435 [2024-11-28 00:02:45.877169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.435 [2024-11-28 00:02:45.906848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.697 [2024-11-28 00:02:46.249851] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:31.697 [2024-11-28 00:02:46.249895] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:31.697 [2024-11-28 00:02:46.249915] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:31.697 [2024-11-28 00:02:46.251868] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:31.697 [2024-11-28 00:02:46.252326] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:31.697 [2024-11-28 00:02:46.252353] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:31.697 [2024-11-28 00:02:46.252617] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:31.697 00:08:31.697 [2024-11-28 00:02:46.252646] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:31.959 00:08:31.959 real 0m0.720s 00:08:31.959 user 0m0.476s 00:08:31.959 sys 0m0.141s 00:08:31.959 00:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.959 00:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.959 ************************************ 00:08:31.959 END TEST bdev_hello_world 00:08:31.959 ************************************ 00:08:31.959 00:02:46 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:31.959 00:02:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.959 00:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.959 00:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.959 ************************************ 00:08:31.959 START TEST bdev_bounds 00:08:31.959 ************************************ 00:08:31.959 00:02:46 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:31.959 00:02:46 -- bdev/blockdev.sh@288 -- # bdevio_pid=73755 00:08:31.959 00:02:46 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.959 Process bdevio pid: 73755 00:08:31.959 00:02:46 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 73755' 00:08:31.959 00:02:46 -- bdev/blockdev.sh@291 -- # waitforlisten 73755 00:08:31.959 00:02:46 -- common/autotest_common.sh@829 -- # '[' -z 73755 ']' 00:08:31.959 00:02:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.959 00:02:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.959 00:02:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
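(Editor's aside: to reproduce the bdev_hello_world step outside the harness, a minimal sketch assuming the same repo layout and the bdev.json generated earlier by scripts/gen_nvme.sh.)
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" \
    -b Nvme0n1p1
# the example's NOTICE output ends with: Read string from bdev : Hello World!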
00:08:31.959 00:02:46 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:31.959 00:02:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.959 00:02:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.959 [2024-11-28 00:02:46.503836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.959 [2024-11-28 00:02:46.503950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73755 ] 00:08:32.221 [2024-11-28 00:02:46.650041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.221 [2024-11-28 00:02:46.681115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.221 [2024-11-28 00:02:46.681249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.221 [2024-11-28 00:02:46.681333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.794 00:02:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.794 00:02:47 -- common/autotest_common.sh@862 -- # return 0 00:08:32.794 00:02:47 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:33.055 I/O targets: 00:08:33.055 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:33.055 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:33.055 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:33.055 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.055 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.055 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:33.055 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:33.055 00:08:33.055 00:08:33.055 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.055 http://cunit.sourceforge.net/ 00:08:33.055 00:08:33.055 00:08:33.055 Suite: bdevio tests on: Nvme3n1 00:08:33.055 Test: blockdev write read block ...passed 00:08:33.055 Test: blockdev write zeroes read block ...passed 00:08:33.055 Test: blockdev write zeroes read no split ...passed 00:08:33.055 Test: blockdev write zeroes read split ...passed 00:08:33.055 Test: blockdev write zeroes read split partial ...passed 00:08:33.055 Test: blockdev reset ...[2024-11-28 00:02:47.481272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:33.055 [2024-11-28 00:02:47.482867] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.055 passed 00:08:33.055 Test: blockdev write read 8 blocks ...passed 00:08:33.055 Test: blockdev write read size > 128k ...passed 00:08:33.055 Test: blockdev write read invalid size ...passed 00:08:33.055 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.055 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.055 Test: blockdev write read max offset ...passed 00:08:33.055 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.055 Test: blockdev writev readv 8 blocks ...passed 00:08:33.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.056 Test: blockdev writev readv block ...passed 00:08:33.056 Test: blockdev writev readv size > 128k ...passed 00:08:33.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.056 Test: blockdev comparev and writev ...[2024-11-28 00:02:47.488280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9604000 len:0x1000 00:08:33.056 [2024-11-28 00:02:47.488332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev nvme passthru rw ...passed 00:08:33.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.056 Test: blockdev nvme admin passthru ...[2024-11-28 00:02:47.488888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.056 [2024-11-28 00:02:47.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev copy ...passed 00:08:33.056 Suite: bdevio tests on: Nvme2n3 00:08:33.056 Test: blockdev write read block ...passed 00:08:33.056 Test: blockdev write zeroes read block ...passed 00:08:33.056 Test: blockdev write zeroes read no split ...passed 00:08:33.056 Test: blockdev write zeroes read split ...passed 00:08:33.056 Test: blockdev write zeroes read split partial ...passed 00:08:33.056 Test: blockdev reset ...[2024-11-28 00:02:47.588509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:33.056 [2024-11-28 00:02:47.591966] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.056 passed 00:08:33.056 Test: blockdev write read 8 blocks ...passed 00:08:33.056 Test: blockdev write read size > 128k ...passed 00:08:33.056 Test: blockdev write read invalid size ...passed 00:08:33.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.056 Test: blockdev write read max offset ...passed 00:08:33.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.056 Test: blockdev writev readv 8 blocks ...passed 00:08:33.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.056 Test: blockdev writev readv block ...passed 00:08:33.056 Test: blockdev writev readv size > 128k ...passed 00:08:33.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.056 Test: blockdev comparev and writev ...[2024-11-28 00:02:47.600927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9604000 len:0x1000 00:08:33.056 [2024-11-28 00:02:47.600966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev nvme passthru rw ...passed 00:08:33.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.056 Test: blockdev nvme admin passthru ...[2024-11-28 00:02:47.601804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.056 [2024-11-28 00:02:47.601834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev copy ...passed 00:08:33.056 Suite: bdevio tests on: Nvme2n2 00:08:33.056 Test: blockdev write read block ...passed 00:08:33.056 Test: blockdev write zeroes read block ...passed 00:08:33.056 Test: blockdev write zeroes read no split ...passed 00:08:33.056 Test: blockdev write zeroes read split ...passed 00:08:33.056 Test: blockdev write zeroes read split partial ...passed 00:08:33.056 Test: blockdev reset ...[2024-11-28 00:02:47.622628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:33.056 passed 00:08:33.056 Test: blockdev write read 8 blocks ...[2024-11-28 00:02:47.624300] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.056 passed 00:08:33.056 Test: blockdev write read size > 128k ...passed 00:08:33.056 Test: blockdev write read invalid size ...passed 00:08:33.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.056 Test: blockdev write read max offset ...passed 00:08:33.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.056 Test: blockdev writev readv 8 blocks ...passed 00:08:33.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.056 Test: blockdev writev readv block ...passed 00:08:33.056 Test: blockdev writev readv size > 128k ...passed 00:08:33.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.056 Test: blockdev comparev and writev ...[2024-11-28 00:02:47.629548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb222000 len:0x1000 00:08:33.056 [2024-11-28 00:02:47.629584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev nvme passthru rw ...passed 00:08:33.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.056 Test: blockdev nvme admin passthru ...[2024-11-28 00:02:47.630126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.056 [2024-11-28 00:02:47.630153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev copy ...passed 00:08:33.056 Suite: bdevio tests on: Nvme2n1 00:08:33.056 Test: blockdev write read block ...passed 00:08:33.056 Test: blockdev write zeroes read block ...passed 00:08:33.056 Test: blockdev write zeroes read no split ...passed 00:08:33.056 Test: blockdev write zeroes read split ...passed 00:08:33.056 Test: blockdev write zeroes read split partial ...passed 00:08:33.056 Test: blockdev reset ...[2024-11-28 00:02:47.644063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:33.056 [2024-11-28 00:02:47.645570] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.056 passed 00:08:33.056 Test: blockdev write read 8 blocks ...passed 00:08:33.056 Test: blockdev write read size > 128k ...passed 00:08:33.056 Test: blockdev write read invalid size ...passed 00:08:33.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.056 Test: blockdev write read max offset ...passed 00:08:33.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.056 Test: blockdev writev readv 8 blocks ...passed 00:08:33.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.056 Test: blockdev writev readv block ...passed 00:08:33.056 Test: blockdev writev readv size > 128k ...passed 00:08:33.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.056 Test: blockdev comparev and writev ...[2024-11-28 00:02:47.651820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c960d000 len:0x1000 00:08:33.056 [2024-11-28 00:02:47.651857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev nvme passthru rw ...passed 00:08:33.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.056 Test: blockdev nvme admin passthru ...[2024-11-28 00:02:47.653002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.056 [2024-11-28 00:02:47.653032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.056 passed 00:08:33.056 Test: blockdev copy ...passed 00:08:33.056 Suite: bdevio tests on: Nvme1n1 00:08:33.056 Test: blockdev write read block ...passed 00:08:33.056 Test: blockdev write zeroes read block ...passed 00:08:33.319 Test: blockdev write zeroes read no split ...passed 00:08:33.319 Test: blockdev write zeroes read split ...passed 00:08:33.319 Test: blockdev write zeroes read split partial ...passed 00:08:33.319 Test: blockdev reset ...[2024-11-28 00:02:47.665914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:33.319 passed 00:08:33.319 Test: blockdev write read 8 blocks ...[2024-11-28 00:02:47.667339] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.319 passed 00:08:33.319 Test: blockdev write read size > 128k ...passed 00:08:33.319 Test: blockdev write read invalid size ...passed 00:08:33.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.319 Test: blockdev write read max offset ...passed 00:08:33.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.319 Test: blockdev writev readv 8 blocks ...passed 00:08:33.319 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.319 Test: blockdev writev readv block ...passed 00:08:33.319 Test: blockdev writev readv size > 128k ...passed 00:08:33.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.319 Test: blockdev comparev and writev ...[2024-11-28 00:02:47.676646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a32000 len:0x1000 00:08:33.319 [2024-11-28 00:02:47.676679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:33.320 passed 00:08:33.320 Test: blockdev nvme passthru rw ...passed 00:08:33.320 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.320 Test: blockdev nvme admin passthru ...[2024-11-28 00:02:47.677379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:33.320 [2024-11-28 00:02:47.677405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:33.320 passed 00:08:33.320 Test: blockdev copy ...passed 00:08:33.320 Suite: bdevio tests on: Nvme0n1p2 00:08:33.320 Test: blockdev write read block ...passed 00:08:33.320 Test: blockdev write zeroes read block ...passed 00:08:33.320 Test: blockdev write zeroes read no split ...passed 00:08:33.320 Test: blockdev write zeroes read split ...passed 00:08:33.320 Test: blockdev write zeroes read split partial ...passed 00:08:33.320 Test: blockdev reset ...[2024-11-28 00:02:47.703854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:33.320 passed 00:08:33.320 Test: blockdev write read 8 blocks ...[2024-11-28 00:02:47.706376] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.320 passed 00:08:33.320 Test: blockdev write read size > 128k ...passed 00:08:33.320 Test: blockdev write read invalid size ...passed 00:08:33.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.320 Test: blockdev write read max offset ...passed 00:08:33.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.320 Test: blockdev writev readv 8 blocks ...passed 00:08:33.320 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.320 Test: blockdev writev readv block ...passed 00:08:33.320 Test: blockdev writev readv size > 128k ...passed 00:08:33.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.320 Test: blockdev comparev and writev ...passed 00:08:33.320 Test: blockdev nvme passthru rw ...passed 00:08:33.320 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.320 Test: blockdev nvme admin passthru ...passed 00:08:33.320 Test: blockdev copy ...[2024-11-28 00:02:47.711921] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:33.320 separate metadata which is not supported yet. 00:08:33.320 passed 00:08:33.320 Suite: bdevio tests on: Nvme0n1p1 00:08:33.320 Test: blockdev write read block ...passed 00:08:33.320 Test: blockdev write zeroes read block ...passed 00:08:33.320 Test: blockdev write zeroes read no split ...passed 00:08:33.320 Test: blockdev write zeroes read split ...passed 00:08:33.320 Test: blockdev write zeroes read split partial ...passed 00:08:33.320 Test: blockdev reset ...[2024-11-28 00:02:47.724460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:33.320 passed 00:08:33.320 Test: blockdev write read 8 blocks ...[2024-11-28 00:02:47.725820] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:33.320 passed 00:08:33.320 Test: blockdev write read size > 128k ...passed 00:08:33.320 Test: blockdev write read invalid size ...passed 00:08:33.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:33.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:33.320 Test: blockdev write read max offset ...passed 00:08:33.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:33.320 Test: blockdev writev readv 8 blocks ...passed 00:08:33.320 Test: blockdev writev readv 30 x 1block ...passed 00:08:33.320 Test: blockdev writev readv block ...passed 00:08:33.320 Test: blockdev writev readv size > 128k ...passed 00:08:33.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:33.320 Test: blockdev comparev and writev ...passed 00:08:33.320 Test: blockdev nvme passthru rw ...passed 00:08:33.320 Test: blockdev nvme passthru vendor specific ...passed 00:08:33.320 Test: blockdev nvme admin passthru ...passed 00:08:33.320 Test: blockdev copy ...[2024-11-28 00:02:47.729884] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:33.320 separate metadata which is not supported yet. 
00:08:33.320 passed 00:08:33.320 00:08:33.320 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.320 suites 7 7 n/a 0 0 00:08:33.320 tests 161 161 161 0 0 00:08:33.320 asserts 1006 1006 1006 0 n/a 00:08:33.320 00:08:33.320 Elapsed time = 0.685 seconds 00:08:33.320 0 00:08:33.320 00:02:47 -- bdev/blockdev.sh@293 -- # killprocess 73755 00:08:33.320 00:02:47 -- common/autotest_common.sh@936 -- # '[' -z 73755 ']' 00:08:33.320 00:02:47 -- common/autotest_common.sh@940 -- # kill -0 73755 00:08:33.320 00:02:47 -- common/autotest_common.sh@941 -- # uname 00:08:33.320 00:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.320 00:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73755 00:08:33.320 00:02:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.320 00:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.320 00:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73755' 00:08:33.320 killing process with pid 73755 00:08:33.320 00:02:47 -- common/autotest_common.sh@955 -- # kill 73755 00:08:33.320 00:02:47 -- common/autotest_common.sh@960 -- # wait 73755 00:08:33.583 00:02:47 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:33.583 00:08:33.583 real 0m1.474s 00:08:33.583 user 0m3.646s 00:08:33.583 sys 0m0.254s 00:08:33.583 00:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.583 00:02:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.583 ************************************ 00:08:33.583 END TEST bdev_bounds 00:08:33.583 ************************************ 00:08:33.583 00:02:47 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:33.583 00:02:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:33.583 00:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.583 00:02:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.583 ************************************ 00:08:33.583 START TEST bdev_nbd 00:08:33.583 ************************************ 00:08:33.583 00:02:47 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:33.583 00:02:47 -- bdev/blockdev.sh@298 -- # uname -s 00:08:33.583 00:02:47 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:33.583 00:02:47 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.583 00:02:47 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:33.583 00:02:47 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.583 00:02:47 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:33.583 00:02:47 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:33.583 00:02:47 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:33.583 00:02:47 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:33.583 00:02:47 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:33.583 00:02:47 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:33.583 00:02:47 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.583 00:02:47 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:33.583 00:02:47 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.583 00:02:47 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:33.583 00:02:47 -- bdev/blockdev.sh@316 -- # nbd_pid=73809 00:08:33.583 00:02:47 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:33.583 00:02:47 -- bdev/blockdev.sh@318 -- # waitforlisten 73809 /var/tmp/spdk-nbd.sock 00:08:33.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:33.583 00:02:47 -- common/autotest_common.sh@829 -- # '[' -z 73809 ']' 00:08:33.583 00:02:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:33.583 00:02:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.583 00:02:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:33.583 00:02:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.583 00:02:47 -- common/autotest_common.sh@10 -- # set +x 00:08:33.583 00:02:47 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:33.583 [2024-11-28 00:02:48.021551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:33.583 [2024-11-28 00:02:48.021772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.583 [2024-11-28 00:02:48.166928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.845 [2024-11-28 00:02:48.198397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.417 00:02:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.417 00:02:48 -- common/autotest_common.sh@862 -- # return 0 00:08:34.417 00:02:48 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.417 00:02:48 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@24 -- # local i 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.418 00:02:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:34.679 00:02:49 -- 
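(Editor's aside: what follows in the trace is one nbd_start_disk RPC per bdev; a minimal sketch of that export/inspect cycle, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock. nbd_stop_disk as the teardown counterpart is an assumption from the standard SPDK RPC set and does not appear in this log.)
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1p1 /dev/nbd0   # export one bdev as a kernel block device
$RPC nbd_get_disks                        # list active bdev-to-nbd mappings as JSON
$RPC nbd_stop_disk /dev/nbd0              # assumed teardown counterpart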
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:34.679 00:02:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:34.679 00:02:49 -- common/autotest_common.sh@867 -- # local i 00:08:34.679 00:02:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:34.679 00:02:49 -- common/autotest_common.sh@871 -- # break 00:08:34.679 00:02:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.679 1+0 records in 00:08:34.679 1+0 records out 00:08:34.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450655 s, 9.1 MB/s 00:08:34.679 00:02:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.679 00:02:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:34.679 00:02:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.679 00:02:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.679 00:02:49 -- common/autotest_common.sh@887 -- # return 0 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:34.679 00:02:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:34.679 00:02:49 -- common/autotest_common.sh@867 -- # local i 00:08:34.679 00:02:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:34.679 00:02:49 -- common/autotest_common.sh@871 -- # break 00:08:34.679 00:02:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.679 00:02:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.679 1+0 records in 00:08:34.679 1+0 records out 00:08:34.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342976 s, 11.9 MB/s 00:08:34.679 00:02:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.679 00:02:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:34.679 00:02:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.679 00:02:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.679 00:02:49 -- common/autotest_common.sh@887 -- # return 0 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.679 00:02:49 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:34.941 00:02:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:34.941 00:02:49 -- common/autotest_common.sh@867 -- # local i 00:08:34.941 00:02:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.941 00:02:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.941 00:02:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:34.941 00:02:49 -- common/autotest_common.sh@871 -- # break 00:08:34.941 00:02:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.941 00:02:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.941 00:02:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.941 1+0 records in 00:08:34.941 1+0 records out 00:08:34.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306791 s, 13.4 MB/s 00:08:34.941 00:02:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.941 00:02:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:34.941 00:02:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.941 00:02:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.941 00:02:49 -- common/autotest_common.sh@887 -- # return 0 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:34.941 00:02:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:35.203 00:02:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:35.203 00:02:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:35.203 00:02:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:35.203 00:02:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:35.203 00:02:49 -- common/autotest_common.sh@867 -- # local i 00:08:35.203 00:02:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.203 00:02:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.203 00:02:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:35.203 00:02:49 -- common/autotest_common.sh@871 -- # break 00:08:35.203 00:02:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.203 00:02:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.203 00:02:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.203 1+0 records in 00:08:35.203 1+0 records out 00:08:35.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449584 s, 9.1 MB/s 00:08:35.203 00:02:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.203 00:02:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:35.203 00:02:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.203 00:02:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.203 00:02:49 -- common/autotest_common.sh@887 -- # return 0 00:08:35.203 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.203 00:02:49 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.203 00:02:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:35.466 00:02:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:35.466 00:02:49 -- common/autotest_common.sh@867 -- # local i 00:08:35.466 00:02:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.466 00:02:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.466 00:02:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:35.466 00:02:49 -- common/autotest_common.sh@871 -- # break 00:08:35.466 00:02:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.466 00:02:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.466 00:02:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.466 1+0 records in 00:08:35.466 1+0 records out 00:08:35.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283586 s, 14.4 MB/s 00:08:35.466 00:02:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.466 00:02:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:35.466 00:02:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.466 00:02:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.466 00:02:49 -- common/autotest_common.sh@887 -- # return 0 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.466 00:02:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:35.728 00:02:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:35.728 00:02:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:35.728 00:02:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:35.728 00:02:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:35.728 00:02:50 -- common/autotest_common.sh@867 -- # local i 00:08:35.728 00:02:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.728 00:02:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.728 00:02:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:35.728 00:02:50 -- common/autotest_common.sh@871 -- # break 00:08:35.728 00:02:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.728 00:02:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.728 00:02:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.728 1+0 records in 00:08:35.728 1+0 records out 00:08:35.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395047 s, 10.4 MB/s 00:08:35.728 00:02:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.728 00:02:50 -- common/autotest_common.sh@884 -- # size=4096 00:08:35.728 00:02:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.728 00:02:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.728 00:02:50 -- common/autotest_common.sh@887 -- # return 0 
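The per-device readiness checks traced above all follow the same shape: poll /proc/partitions until the NBD node shows up, then issue a single 4 KiB direct read and confirm the copied file is non-empty. A minimal standalone sketch of that pattern (the retry pacing, scratch path, and function name are illustrative, not the exact autotest helper):

#!/usr/bin/env bash
# Illustrative readiness check in the style of the waitfornbd trace above.
wait_for_nbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # The device counts as present once the kernel lists it in /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # pacing added for the sketch; the traced helper simply retries
    done
    # One 4 KiB direct-I/O read proves the device actually serves data.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]
}
wait_for_nbd nbd0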
00:08:35.728 00:02:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.728 00:02:50 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.728 00:02:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:35.990 00:02:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:35.990 00:02:50 -- common/autotest_common.sh@867 -- # local i 00:08:35.990 00:02:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.990 00:02:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.990 00:02:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:35.990 00:02:50 -- common/autotest_common.sh@871 -- # break 00:08:35.990 00:02:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.990 00:02:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.990 00:02:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.990 1+0 records in 00:08:35.990 1+0 records out 00:08:35.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371355 s, 11.0 MB/s 00:08:35.990 00:02:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.990 00:02:50 -- common/autotest_common.sh@884 -- # size=4096 00:08:35.990 00:02:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.990 00:02:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.990 00:02:50 -- common/autotest_common.sh@887 -- # return 0 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd0", 00:08:35.990 "bdev_name": "Nvme0n1p1" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd1", 00:08:35.990 "bdev_name": "Nvme0n1p2" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd2", 00:08:35.990 "bdev_name": "Nvme1n1" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd3", 00:08:35.990 "bdev_name": "Nvme2n1" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd4", 00:08:35.990 "bdev_name": "Nvme2n2" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd5", 00:08:35.990 "bdev_name": "Nvme2n3" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd6", 00:08:35.990 "bdev_name": "Nvme3n1" 00:08:35.990 } 00:08:35.990 ]' 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd0", 00:08:35.990 "bdev_name": "Nvme0n1p1" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd1", 00:08:35.990 "bdev_name": "Nvme0n1p2" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd2", 00:08:35.990 "bdev_name": "Nvme1n1" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd3", 00:08:35.990 "bdev_name": "Nvme2n1" 00:08:35.990 }, 00:08:35.990 { 
00:08:35.990 "nbd_device": "/dev/nbd4", 00:08:35.990 "bdev_name": "Nvme2n2" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd5", 00:08:35.990 "bdev_name": "Nvme2n3" 00:08:35.990 }, 00:08:35.990 { 00:08:35.990 "nbd_device": "/dev/nbd6", 00:08:35.990 "bdev_name": "Nvme3n1" 00:08:35.990 } 00:08:35.990 ]' 00:08:35.990 00:02:50 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@51 -- # local i 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@41 -- # break 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.253 00:02:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@41 -- # break 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.515 00:02:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@41 -- # break 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:36.777 00:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@41 -- # break 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.777 00:02:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@41 -- # break 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.038 00:02:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@41 -- # break 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.299 00:02:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@41 -- # break 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.561 00:02:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:37.823 00:02:52 -- 
bdev/nbd_common.sh@65 -- # true 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@65 -- # count=0 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@122 -- # count=0 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@127 -- # return 0 00:08:37.823 00:02:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@12 -- # local i 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:37.823 /dev/nbd0 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.823 00:02:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.823 00:02:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:37.823 00:02:52 -- common/autotest_common.sh@867 -- # local i 00:08:37.823 00:02:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:37.823 00:02:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:37.823 00:02:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:38.085 00:02:52 -- common/autotest_common.sh@871 -- # break 00:08:38.085 00:02:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.085 1+0 records in 00:08:38.085 1+0 records out 00:08:38.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416695 s, 9.8 MB/s 00:08:38.085 00:02:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.085 00:02:52 -- common/autotest_common.sh@884 -- # size=4096 00:08:38.085 00:02:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.085 00:02:52 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.085 00:02:52 -- common/autotest_common.sh@887 -- # return 0 00:08:38.085 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.085 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.085 00:02:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:38.085 /dev/nbd1 00:08:38.085 00:02:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.085 00:02:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.085 00:02:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:38.085 00:02:52 -- common/autotest_common.sh@867 -- # local i 00:08:38.085 00:02:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:38.085 00:02:52 -- common/autotest_common.sh@871 -- # break 00:08:38.085 00:02:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.085 00:02:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.085 1+0 records in 00:08:38.085 1+0 records out 00:08:38.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454279 s, 9.0 MB/s 00:08:38.085 00:02:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.085 00:02:52 -- common/autotest_common.sh@884 -- # size=4096 00:08:38.085 00:02:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.086 00:02:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.086 00:02:52 -- common/autotest_common.sh@887 -- # return 0 00:08:38.086 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.086 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.086 00:02:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:38.348 /dev/nbd10 00:08:38.348 00:02:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:38.348 00:02:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:38.348 00:02:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:38.348 00:02:52 -- common/autotest_common.sh@867 -- # local i 00:08:38.348 00:02:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.348 00:02:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.348 00:02:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:38.348 00:02:52 -- common/autotest_common.sh@871 -- # break 00:08:38.348 00:02:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.348 00:02:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.348 00:02:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.348 1+0 records in 00:08:38.348 1+0 records out 00:08:38.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413119 s, 9.9 MB/s 00:08:38.348 00:02:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.348 00:02:52 -- common/autotest_common.sh@884 -- # size=4096 00:08:38.348 00:02:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
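For reference, the NBD lifecycle being exercised here reduces to three RPCs against the same application socket. A hedged sketch using the script and socket path seen in the trace (the bdev and device names are examples only):

# Map a bdev onto an NBD node, inspect the mapping, and tear it down again.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1p1 /dev/nbd0   # export the bdev as /dev/nbd0
$RPC nbd_get_disks                        # JSON list of nbd_device/bdev_name pairs
$RPC nbd_stop_disk /dev/nbd0              # detach the NBD node again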
00:08:38.348 00:02:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.348 00:02:52 -- common/autotest_common.sh@887 -- # return 0 00:08:38.348 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.348 00:02:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.348 00:02:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:38.610 /dev/nbd11 00:08:38.610 00:02:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:38.610 00:02:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:38.610 00:02:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:38.610 00:02:53 -- common/autotest_common.sh@867 -- # local i 00:08:38.610 00:02:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.610 00:02:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.610 00:02:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:38.610 00:02:53 -- common/autotest_common.sh@871 -- # break 00:08:38.610 00:02:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.610 00:02:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.610 00:02:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.610 1+0 records in 00:08:38.610 1+0 records out 00:08:38.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697235 s, 5.9 MB/s 00:08:38.610 00:02:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.610 00:02:53 -- common/autotest_common.sh@884 -- # size=4096 00:08:38.610 00:02:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.610 00:02:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.610 00:02:53 -- common/autotest_common.sh@887 -- # return 0 00:08:38.610 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.610 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.610 00:02:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:38.872 /dev/nbd12 00:08:38.872 00:02:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:38.872 00:02:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:38.872 00:02:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:38.872 00:02:53 -- common/autotest_common.sh@867 -- # local i 00:08:38.872 00:02:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:38.872 00:02:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:38.872 00:02:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:38.872 00:02:53 -- common/autotest_common.sh@871 -- # break 00:08:38.872 00:02:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:38.872 00:02:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:38.872 00:02:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.872 1+0 records in 00:08:38.872 1+0 records out 00:08:38.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293249 s, 14.0 MB/s 00:08:38.872 00:02:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.872 00:02:53 -- common/autotest_common.sh@884 -- # size=4096 00:08:38.872 00:02:53 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.872 00:02:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:38.872 00:02:53 -- common/autotest_common.sh@887 -- # return 0 00:08:38.872 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.872 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:38.872 00:02:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:39.133 /dev/nbd13 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:39.133 00:02:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:39.133 00:02:53 -- common/autotest_common.sh@867 -- # local i 00:08:39.133 00:02:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:39.133 00:02:53 -- common/autotest_common.sh@871 -- # break 00:08:39.133 00:02:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.133 1+0 records in 00:08:39.133 1+0 records out 00:08:39.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509998 s, 8.0 MB/s 00:08:39.133 00:02:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.133 00:02:53 -- common/autotest_common.sh@884 -- # size=4096 00:08:39.133 00:02:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.133 00:02:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.133 00:02:53 -- common/autotest_common.sh@887 -- # return 0 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:39.133 /dev/nbd14 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:39.133 00:02:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:39.133 00:02:53 -- common/autotest_common.sh@867 -- # local i 00:08:39.133 00:02:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:39.133 00:02:53 -- common/autotest_common.sh@871 -- # break 00:08:39.133 00:02:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.133 00:02:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.133 1+0 records in 00:08:39.133 1+0 records out 00:08:39.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413451 s, 9.9 MB/s 00:08:39.133 00:02:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.133 00:02:53 -- common/autotest_common.sh@884 -- # size=4096 00:08:39.133 00:02:53 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.133 00:02:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.133 00:02:53 -- common/autotest_common.sh@887 -- # return 0 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.133 00:02:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:39.394 00:02:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.394 00:02:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.394 00:02:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.394 00:02:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd0", 00:08:39.394 "bdev_name": "Nvme0n1p1" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd1", 00:08:39.394 "bdev_name": "Nvme0n1p2" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd10", 00:08:39.394 "bdev_name": "Nvme1n1" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd11", 00:08:39.394 "bdev_name": "Nvme2n1" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd12", 00:08:39.394 "bdev_name": "Nvme2n2" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd13", 00:08:39.394 "bdev_name": "Nvme2n3" 00:08:39.394 }, 00:08:39.394 { 00:08:39.394 "nbd_device": "/dev/nbd14", 00:08:39.394 "bdev_name": "Nvme3n1" 00:08:39.394 } 00:08:39.394 ]' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd0", 00:08:39.395 "bdev_name": "Nvme0n1p1" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd1", 00:08:39.395 "bdev_name": "Nvme0n1p2" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd10", 00:08:39.395 "bdev_name": "Nvme1n1" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd11", 00:08:39.395 "bdev_name": "Nvme2n1" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd12", 00:08:39.395 "bdev_name": "Nvme2n2" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd13", 00:08:39.395 "bdev_name": "Nvme2n3" 00:08:39.395 }, 00:08:39.395 { 00:08:39.395 "nbd_device": "/dev/nbd14", 00:08:39.395 "bdev_name": "Nvme3n1" 00:08:39.395 } 00:08:39.395 ]' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:39.395 /dev/nbd1 00:08:39.395 /dev/nbd10 00:08:39.395 /dev/nbd11 00:08:39.395 /dev/nbd12 00:08:39.395 /dev/nbd13 00:08:39.395 /dev/nbd14' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:39.395 /dev/nbd1 00:08:39.395 /dev/nbd10 00:08:39.395 /dev/nbd11 00:08:39.395 /dev/nbd12 00:08:39.395 /dev/nbd13 00:08:39.395 /dev/nbd14' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@65 -- # count=7 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@95 -- # count=7 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:39.395 256+0 records in 00:08:39.395 256+0 records out 00:08:39.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752307 s, 139 MB/s 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.395 00:02:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:39.657 256+0 records in 00:08:39.657 256+0 records out 00:08:39.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0774079 s, 13.5 MB/s 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:39.657 256+0 records in 00:08:39.657 256+0 records out 00:08:39.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.078349 s, 13.4 MB/s 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:39.657 256+0 records in 00:08:39.657 256+0 records out 00:08:39.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0861896 s, 12.2 MB/s 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.657 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:39.919 256+0 records in 00:08:39.919 256+0 records out 00:08:39.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0775291 s, 13.5 MB/s 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:39.919 256+0 records in 00:08:39.919 256+0 records out 00:08:39.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0749655 s, 14.0 MB/s 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:39.919 256+0 records in 00:08:39.919 256+0 records out 00:08:39.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0753095 s, 13.9 MB/s 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.919 00:02:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:40.180 256+0 records in 00:08:40.180 256+0 records out 00:08:40.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760838 s, 13.8 MB/s 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.180 00:02:54 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:40.180 00:02:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.181 00:02:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.181 00:02:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:40.181 00:02:54 -- bdev/nbd_common.sh@51 -- # local i 00:08:40.181 00:02:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.181 00:02:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:40.439 
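The data-verification pass producing the dd and cmp output above is a simple write-then-compare loop over the exported devices; sketched roughly (the scratch path and the device subset are illustrative):

# Write the same 1 MiB of random data to each NBD device, then compare it back.
pattern=/tmp/nbdrandtest                       # the trace uses test/bdev/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10; do  # subset of the seven devices above
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10; do
    cmp -b -n 1M "$pattern" "$dev" || echo "verify failed on $dev" >&2
done
rm -f "$pattern"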
00:02:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@41 -- # break 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.439 00:02:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@41 -- # break 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.696 00:02:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@41 -- # break 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.955 00:02:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@41 -- # break 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.213 00:02:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@41 -- # break 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.472 00:02:55 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.472 00:02:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@41 -- # break 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.472 00:02:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@65 -- # true 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@104 -- # count=0 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@109 -- # return 0 00:08:41.732 00:02:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:41.732 00:02:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:41.991 malloc_lvol_verify 00:08:41.991 00:02:56 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:42.249 6acffbb0-d02c-49b9-b173-af370e749033 00:08:42.249 00:02:56 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:42.508 cc17502f-ef24-4c38-9742-16fe9c2f3d2e 00:08:42.508 00:02:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:42.508 /dev/nbd0 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:42.508 mke2fs 1.47.0 (5-Feb-2023) 00:08:42.508 Discarding device blocks: 0/4096 done 00:08:42.508 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:42.508 00:08:42.508 Allocating group tables: 0/1 done 00:08:42.508 Writing inode tables: 0/1 done 
00:08:42.508 Creating journal (1024 blocks): done 00:08:42.508 Writing superblocks and filesystem accounting information: 0/1 done 00:08:42.508 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.508 00:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@41 -- # break 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:42.767 00:02:57 -- bdev/nbd_common.sh@147 -- # return 0 00:08:42.767 00:02:57 -- bdev/blockdev.sh@324 -- # killprocess 73809 00:08:42.767 00:02:57 -- common/autotest_common.sh@936 -- # '[' -z 73809 ']' 00:08:42.767 00:02:57 -- common/autotest_common.sh@940 -- # kill -0 73809 00:08:42.767 00:02:57 -- common/autotest_common.sh@941 -- # uname 00:08:42.767 00:02:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.767 00:02:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73809 00:08:42.767 killing process with pid 73809 00:08:42.767 00:02:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.768 00:02:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.768 00:02:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73809' 00:08:42.768 00:02:57 -- common/autotest_common.sh@955 -- # kill 73809 00:08:42.768 00:02:57 -- common/autotest_common.sh@960 -- # wait 73809 00:08:43.027 ************************************ 00:08:43.027 END TEST bdev_nbd 00:08:43.027 ************************************ 00:08:43.027 00:02:57 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:43.027 00:08:43.027 real 0m9.528s 00:08:43.027 user 0m13.904s 00:08:43.027 sys 0m3.243s 00:08:43.027 00:02:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.027 00:02:57 -- common/autotest_common.sh@10 -- # set +x 00:08:43.027 00:02:57 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:43.027 00:02:57 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:43.027 00:02:57 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:43.027 skipping fio tests on NVMe due to multi-ns failures. 00:08:43.027 00:02:57 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
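The lvol round that finished just above (malloc bdev, lvstore, lvol, NBD export, mkfs.ext4) condenses to the following RPC sequence; this is a sketch of the traced commands, not a drop-in replacement for the test:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # a clean format exercises the data path
$RPC nbd_stop_disk /dev/nbd0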
00:08:43.027 00:02:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.027 00:02:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.027 00:02:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:43.027 00:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.027 00:02:57 -- common/autotest_common.sh@10 -- # set +x 00:08:43.027 ************************************ 00:08:43.027 START TEST bdev_verify 00:08:43.027 ************************************ 00:08:43.027 00:02:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.027 [2024-11-28 00:02:57.583413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.027 [2024-11-28 00:02:57.583505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74204 ] 00:08:43.286 [2024-11-28 00:02:57.731587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.287 [2024-11-28 00:02:57.763128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.287 [2024-11-28 00:02:57.763310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.547 Running I/O for 5 seconds... 00:08:48.825 00:08:48.825 Latency(us) 00:08:48.825 [2024-11-28T00:03:03.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x5e800 00:08:48.825 Nvme0n1p1 : 5.05 2436.36 9.52 0.00 0.00 52308.44 18955.03 60898.07 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x5e800 length 0x5e800 00:08:48.825 Nvme0n1p1 : 5.06 2451.08 9.57 0.00 0.00 52055.42 7410.61 64931.05 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x5e7ff 00:08:48.825 Nvme0n1p2 : 5.05 2435.64 9.51 0.00 0.00 52281.26 18148.43 58074.98 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:48.825 Nvme0n1p2 : 5.06 2450.41 9.57 0.00 0.00 52021.12 7813.91 61704.66 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0xa0000 00:08:48.825 Nvme1n1 : 5.06 2440.95 9.53 0.00 0.00 52172.88 3012.14 53638.70 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0xa0000 length 0xa0000 00:08:48.825 Nvme1n1 : 5.06 2457.73 9.60 0.00 0.00 51747.17 3327.21 54041.99 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x80000 00:08:48.825 Nvme2n1 : 5.06 2440.00 
9.53 0.00 0.00 52088.74 4184.22 54041.99 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x80000 length 0x80000 00:08:48.825 Nvme2n1 : 5.06 2457.00 9.60 0.00 0.00 51700.04 3982.57 56058.49 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x80000 00:08:48.825 Nvme2n2 : 5.06 2447.09 9.56 0.00 0.00 51940.00 1764.43 54445.29 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x80000 length 0x80000 00:08:48.825 Nvme2n2 : 5.07 2455.14 9.59 0.00 0.00 51668.83 7057.72 57268.38 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x80000 00:08:48.825 Nvme2n3 : 5.07 2445.31 9.55 0.00 0.00 51898.66 4763.96 53235.40 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x80000 length 0x80000 00:08:48.825 Nvme2n3 : 5.07 2453.24 9.58 0.00 0.00 51642.41 10435.35 54445.29 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x0 length 0x20000 00:08:48.825 Nvme3n1 : 5.07 2443.46 9.54 0.00 0.00 51870.11 7965.14 53235.40 00:08:48.825 [2024-11-28T00:03:03.427Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:48.825 Verification LBA range: start 0x20000 length 0x20000 00:08:48.825 Nvme3n1 : 5.08 2451.67 9.58 0.00 0.00 51626.29 12855.14 53638.70 00:08:48.825 [2024-11-28T00:03:03.427Z] =================================================================================================================== 00:08:48.825 [2024-11-28T00:03:03.427Z] Total : 34265.08 133.85 0.00 0.00 51929.27 1764.43 64931.05 00:08:54.106 00:08:54.106 real 0m10.416s 00:08:54.106 user 0m20.055s 00:08:54.106 sys 0m0.245s 00:08:54.106 00:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.106 ************************************ 00:08:54.106 END TEST bdev_verify 00:08:54.106 ************************************ 00:08:54.106 00:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.106 00:03:07 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:54.106 00:03:07 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:54.106 00:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.106 00:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.106 ************************************ 00:08:54.106 START TEST bdev_verify_big_io 00:08:54.106 ************************************ 00:08:54.106 00:03:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:54.106 [2024-11-28 00:03:08.071565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:54.106 [2024-11-28 00:03:08.071679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74322 ] 00:08:54.106 [2024-11-28 00:03:08.220965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.106 [2024-11-28 00:03:08.253307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.106 [2024-11-28 00:03:08.253415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.106 Running I/O for 5 seconds... 00:09:00.698 00:09:00.698 Latency(us) 00:09:00.698 [2024-11-28T00:03:15.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0x0 length 0x5e80 00:09:00.698 Nvme0n1p1 : 5.46 222.31 13.89 0.00 0.00 565692.70 52428.80 851766.35 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0x5e80 length 0x5e80 00:09:00.698 Nvme0n1p1 : 5.40 257.56 16.10 0.00 0.00 480334.41 45572.73 771106.66 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0x0 length 0x5e7f 00:09:00.698 Nvme0n1p2 : 5.46 222.22 13.89 0.00 0.00 557083.91 53638.70 780785.82 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:00.698 Nvme0n1p2 : 5.40 257.47 16.09 0.00 0.00 473433.25 46379.32 713031.68 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0x0 length 0xa000 00:09:00.698 Nvme1n1 : 5.47 222.07 13.88 0.00 0.00 548425.61 56058.49 709805.29 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.698 Verification LBA range: start 0xa000 length 0xa000 00:09:00.698 Nvme1n1 : 5.43 263.33 16.46 0.00 0.00 457169.12 26012.75 651730.31 00:09:00.698 [2024-11-28T00:03:15.300Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x0 length 0x8000 00:09:00.699 Nvme2n1 : 5.47 222.00 13.87 0.00 0.00 539830.77 56865.08 642051.15 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x8000 length 0x8000 00:09:00.699 Nvme2n1 : 5.46 270.55 16.91 0.00 0.00 438433.11 30650.68 583976.17 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x0 length 0x8000 00:09:00.699 Nvme2n2 : 5.53 226.78 14.17 0.00 0.00 519194.63 54445.29 609787.27 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x8000 length 0x8000 00:09:00.699 Nvme2n2 : 5.50 284.90 17.81 0.00 0.00 410573.12 24601.21 554938.68 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x0 length 
0x8000 00:09:00.699 Nvme2n3 : 5.54 243.53 15.22 0.00 0.00 480939.81 2659.25 558165.07 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x8000 length 0x8000 00:09:00.699 Nvme2n3 : 5.53 308.59 19.29 0.00 0.00 374490.53 563.99 777559.43 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x0 length 0x2000 00:09:00.699 Nvme3n1 : 5.54 250.56 15.66 0.00 0.00 460067.99 2003.89 890483.00 00:09:00.699 [2024-11-28T00:03:15.301Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:00.699 Verification LBA range: start 0x2000 length 0x2000 00:09:00.699 Nvme3n1 : 5.35 252.00 15.75 0.00 0.00 496581.93 71383.83 838860.80 00:09:00.699 [2024-11-28T00:03:15.301Z] =================================================================================================================== 00:09:00.699 [2024-11-28T00:03:15.301Z] Total : 3503.88 218.99 0.00 0.00 480351.07 563.99 890483.00 00:09:00.960 00:09:00.960 real 0m7.484s 00:09:00.960 user 0m14.236s 00:09:00.960 sys 0m0.224s 00:09:00.960 00:03:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.960 ************************************ 00:09:00.960 END TEST bdev_verify_big_io 00:09:00.960 ************************************ 00:09:00.960 00:03:15 -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 00:03:15 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:00.960 00:03:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:00.960 00:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.960 00:03:15 -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 ************************************ 00:09:00.960 START TEST bdev_write_zeroes 00:09:00.960 ************************************ 00:09:00.960 00:03:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.222 [2024-11-28 00:03:15.616837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:01.222 [2024-11-28 00:03:15.616952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74420 ] 00:09:01.222 [2024-11-28 00:03:15.765890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.222 [2024-11-28 00:03:15.800970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.794 Running I/O for 1 seconds... 
00:09:02.734 00:09:02.735 Latency(us) 00:09:02.735 [2024-11-28T00:03:17.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme0n1p1 : 1.01 7153.08 27.94 0.00 0.00 17786.58 5772.21 156479.80 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme0n1p2 : 1.02 7049.96 27.54 0.00 0.00 18018.12 6175.51 156479.80 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme1n1 : 1.02 8179.33 31.95 0.00 0.00 15508.19 8670.92 134701.69 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme2n1 : 1.02 8107.29 31.67 0.00 0.00 15619.55 8771.74 138734.67 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme2n2 : 1.02 8098.10 31.63 0.00 0.00 15614.77 8368.44 138734.67 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme2n3 : 1.02 8145.92 31.82 0.00 0.00 15534.65 7158.55 138734.67 00:09:02.735 [2024-11-28T00:03:17.337Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:02.735 Nvme3n1 : 1.02 8127.92 31.75 0.00 0.00 15527.25 6805.66 138734.67 00:09:02.735 [2024-11-28T00:03:17.337Z] =================================================================================================================== 00:09:02.735 [2024-11-28T00:03:17.337Z] Total : 54861.60 214.30 0.00 0.00 16164.84 5772.21 156479.80 00:09:02.995 00:09:02.995 real 0m1.832s 00:09:02.995 user 0m1.555s 00:09:02.995 sys 0m0.166s 00:09:02.995 00:03:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.995 ************************************ 00:09:02.995 END TEST bdev_write_zeroes 00:09:02.995 ************************************ 00:09:02.995 00:03:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.995 00:03:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:02.995 00:03:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:02.995 00:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.995 00:03:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.995 ************************************ 00:09:02.995 START TEST bdev_json_nonenclosed 00:09:02.995 ************************************ 00:09:02.995 00:03:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:02.995 [2024-11-28 00:03:17.509432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:02.995 [2024-11-28 00:03:17.509548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74462 ] 00:09:03.257 [2024-11-28 00:03:17.653626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.257 [2024-11-28 00:03:17.688962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.257 [2024-11-28 00:03:17.689119] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:03.257 [2024-11-28 00:03:17.689139] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.257 00:09:03.257 real 0m0.319s 00:09:03.257 user 0m0.131s 00:09:03.257 sys 0m0.084s 00:09:03.257 00:03:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.257 ************************************ 00:09:03.257 END TEST bdev_json_nonenclosed 00:09:03.257 00:03:17 -- common/autotest_common.sh@10 -- # set +x 00:09:03.257 ************************************ 00:09:03.257 00:03:17 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:03.257 00:03:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:03.257 00:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.257 00:03:17 -- common/autotest_common.sh@10 -- # set +x 00:09:03.257 ************************************ 00:09:03.257 START TEST bdev_json_nonarray 00:09:03.257 ************************************ 00:09:03.257 00:03:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:03.518 [2024-11-28 00:03:17.873883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.518 [2024-11-28 00:03:17.873994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74482 ] 00:09:03.518 [2024-11-28 00:03:18.017073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.518 [2024-11-28 00:03:18.054810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.518 [2024-11-28 00:03:18.054975] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:03.518 [2024-11-28 00:03:18.055000] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.779 00:09:03.779 real 0m0.312s 00:09:03.779 user 0m0.119s 00:09:03.779 sys 0m0.090s 00:09:03.779 00:03:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.779 ************************************ 00:09:03.779 END TEST bdev_json_nonarray 00:09:03.779 ************************************ 00:09:03.779 00:03:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 00:03:18 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:03.779 00:03:18 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:03.779 00:03:18 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:03.779 00:03:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.779 00:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.779 00:03:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 ************************************ 00:09:03.779 START TEST bdev_gpt_uuid 00:09:03.779 ************************************ 00:09:03.779 00:03:18 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:03.779 00:03:18 -- bdev/blockdev.sh@612 -- # local bdev 00:09:03.779 00:03:18 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:03.779 00:03:18 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=74508 00:09:03.779 00:03:18 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:03.779 00:03:18 -- bdev/blockdev.sh@47 -- # waitforlisten 74508 00:09:03.779 00:03:18 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:03.779 00:03:18 -- common/autotest_common.sh@829 -- # '[' -z 74508 ']' 00:09:03.779 00:03:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.779 00:03:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.779 00:03:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.779 00:03:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.779 00:03:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 [2024-11-28 00:03:18.245521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.779 [2024-11-28 00:03:18.245914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74508 ] 00:09:04.040 [2024-11-28 00:03:18.389909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.040 [2024-11-28 00:03:18.425591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.040 [2024-11-28 00:03:18.425786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.611 00:03:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.611 00:03:19 -- common/autotest_common.sh@862 -- # return 0 00:09:04.611 00:03:19 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:04.611 00:03:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.611 00:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:04.872 Some configs were skipped because the RPC state that can call them passed over. 
00:09:04.872 00:03:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.872 00:03:19 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:04.872 00:03:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.872 00:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:04.872 00:03:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.872 00:03:19 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:04.872 00:03:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.872 00:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:04.872 00:03:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.872 00:03:19 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:04.872 { 00:09:04.872 "name": "Nvme0n1p1", 00:09:04.872 "aliases": [ 00:09:04.872 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:04.872 ], 00:09:04.872 "product_name": "GPT Disk", 00:09:04.872 "block_size": 4096, 00:09:04.872 "num_blocks": 774144, 00:09:04.872 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:04.872 "md_size": 64, 00:09:04.872 "md_interleave": false, 00:09:04.872 "dif_type": 0, 00:09:04.872 "assigned_rate_limits": { 00:09:04.872 "rw_ios_per_sec": 0, 00:09:04.872 "rw_mbytes_per_sec": 0, 00:09:04.872 "r_mbytes_per_sec": 0, 00:09:04.872 "w_mbytes_per_sec": 0 00:09:04.872 }, 00:09:04.872 "claimed": false, 00:09:04.872 "zoned": false, 00:09:04.872 "supported_io_types": { 00:09:04.872 "read": true, 00:09:04.872 "write": true, 00:09:04.872 "unmap": true, 00:09:04.872 "write_zeroes": true, 00:09:04.872 "flush": true, 00:09:04.872 "reset": true, 00:09:04.873 "compare": true, 00:09:04.873 "compare_and_write": false, 00:09:04.873 "abort": true, 00:09:04.873 "nvme_admin": false, 00:09:04.873 "nvme_io": false 00:09:04.873 }, 00:09:04.873 "driver_specific": { 00:09:04.873 "gpt": { 00:09:04.873 "base_bdev": "Nvme0n1", 00:09:04.873 "offset_blocks": 256, 00:09:04.873 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:04.873 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:04.873 "partition_name": "SPDK_TEST_first" 00:09:04.873 } 00:09:04.873 } 00:09:04.873 } 00:09:04.873 ]' 00:09:04.873 00:03:19 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:04.873 00:03:19 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:04.873 00:03:19 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:05.135 00:03:19 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:05.135 00:03:19 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:05.135 00:03:19 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:05.135 00:03:19 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:05.135 00:03:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.135 00:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:05.135 00:03:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.135 00:03:19 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:05.135 { 00:09:05.135 "name": "Nvme0n1p2", 00:09:05.135 "aliases": [ 00:09:05.135 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:05.135 ], 00:09:05.135 "product_name": "GPT Disk", 00:09:05.135 "block_size": 4096, 00:09:05.135 "num_blocks": 774143, 00:09:05.135 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:05.135 "md_size": 64, 00:09:05.135 "md_interleave": false, 00:09:05.135 "dif_type": 0, 00:09:05.135 "assigned_rate_limits": { 00:09:05.135 "rw_ios_per_sec": 0, 00:09:05.135 "rw_mbytes_per_sec": 0, 00:09:05.135 "r_mbytes_per_sec": 0, 00:09:05.135 "w_mbytes_per_sec": 0 00:09:05.135 }, 00:09:05.135 "claimed": false, 00:09:05.135 "zoned": false, 00:09:05.135 "supported_io_types": { 00:09:05.135 "read": true, 00:09:05.135 "write": true, 00:09:05.135 "unmap": true, 00:09:05.135 "write_zeroes": true, 00:09:05.135 "flush": true, 00:09:05.135 "reset": true, 00:09:05.135 "compare": true, 00:09:05.135 "compare_and_write": false, 00:09:05.135 "abort": true, 00:09:05.135 "nvme_admin": false, 00:09:05.135 "nvme_io": false 00:09:05.135 }, 00:09:05.135 "driver_specific": { 00:09:05.135 "gpt": { 00:09:05.135 "base_bdev": "Nvme0n1", 00:09:05.135 "offset_blocks": 774400, 00:09:05.136 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:05.136 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:05.136 "partition_name": "SPDK_TEST_second" 00:09:05.136 } 00:09:05.136 } 00:09:05.136 } 00:09:05.136 ]' 00:09:05.136 00:03:19 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:05.136 00:03:19 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:05.136 00:03:19 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:05.136 00:03:19 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:05.136 00:03:19 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:05.136 00:03:19 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:05.136 00:03:19 -- bdev/blockdev.sh@629 -- # killprocess 74508 00:09:05.136 00:03:19 -- common/autotest_common.sh@936 -- # '[' -z 74508 ']' 00:09:05.136 00:03:19 -- common/autotest_common.sh@940 -- # kill -0 74508 00:09:05.136 00:03:19 -- common/autotest_common.sh@941 -- # uname 00:09:05.136 00:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:05.136 00:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74508 00:09:05.136 00:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:05.136 killing process with pid 74508 00:09:05.136 00:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:05.136 00:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74508' 00:09:05.136 00:03:19 -- common/autotest_common.sh@955 -- # kill 74508 00:09:05.136 00:03:19 -- common/autotest_common.sh@960 -- # wait 74508 00:09:05.397 00:09:05.397 real 0m1.798s 00:09:05.397 user 0m1.957s 00:09:05.397 sys 0m0.337s 00:09:05.397 00:03:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.397 ************************************ 00:09:05.397 END TEST bdev_gpt_uuid 00:09:05.397 ************************************ 00:09:05.397 00:03:19 -- common/autotest_common.sh@10 -- # set +x 00:09:05.659 00:03:20 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:05.659 00:03:20 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:05.659 00:03:20 -- bdev/blockdev.sh@809 -- # cleanup 00:09:05.659 00:03:20 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:05.659 00:03:20 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:05.659 00:03:20 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:05.659 00:03:20 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:05.659 00:03:20 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:05.659 00:03:20 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:05.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.182 Waiting for block devices as requested 00:09:06.182 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.182 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.444 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.444 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.747 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:11.747 00:03:26 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:11.747 00:03:26 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:11.747 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:11.747 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:11.747 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:11.747 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:11.747 00:03:26 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:11.747 00:09:11.747 real 0m51.301s 00:09:11.747 user 1m7.726s 00:09:11.747 sys 0m7.358s 00:09:11.747 00:03:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.747 ************************************ 00:09:11.747 END TEST blockdev_nvme_gpt 00:09:11.747 ************************************ 00:09:11.747 00:03:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 00:03:26 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:12.008 00:03:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.008 00:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.008 00:03:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 ************************************ 00:09:12.008 START TEST nvme 00:09:12.008 ************************************ 00:09:12.008 00:03:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:12.008 * Looking for test storage... 
00:09:12.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:12.008 00:03:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:12.008 00:03:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:12.008 00:03:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:12.008 00:03:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:12.008 00:03:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:12.008 00:03:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:12.008 00:03:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:12.008 00:03:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:12.008 00:03:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:12.008 00:03:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.008 00:03:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:12.008 00:03:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:12.008 00:03:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:12.008 00:03:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:12.008 00:03:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:12.008 00:03:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:12.008 00:03:26 -- scripts/common.sh@344 -- # : 1 00:09:12.008 00:03:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:12.008 00:03:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.008 00:03:26 -- scripts/common.sh@364 -- # decimal 1 00:09:12.008 00:03:26 -- scripts/common.sh@352 -- # local d=1 00:09:12.008 00:03:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.008 00:03:26 -- scripts/common.sh@354 -- # echo 1 00:09:12.008 00:03:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:12.008 00:03:26 -- scripts/common.sh@365 -- # decimal 2 00:09:12.008 00:03:26 -- scripts/common.sh@352 -- # local d=2 00:09:12.008 00:03:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.008 00:03:26 -- scripts/common.sh@354 -- # echo 2 00:09:12.008 00:03:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:12.008 00:03:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:12.008 00:03:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:12.008 00:03:26 -- scripts/common.sh@367 -- # return 0 00:09:12.008 00:03:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.008 00:03:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:12.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.008 --rc genhtml_branch_coverage=1 00:09:12.008 --rc genhtml_function_coverage=1 00:09:12.009 --rc genhtml_legend=1 00:09:12.009 --rc geninfo_all_blocks=1 00:09:12.009 --rc geninfo_unexecuted_blocks=1 00:09:12.009 00:09:12.009 ' 00:09:12.009 00:03:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.009 --rc genhtml_branch_coverage=1 00:09:12.009 --rc genhtml_function_coverage=1 00:09:12.009 --rc genhtml_legend=1 00:09:12.009 --rc geninfo_all_blocks=1 00:09:12.009 --rc geninfo_unexecuted_blocks=1 00:09:12.009 00:09:12.009 ' 00:09:12.009 00:03:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.009 --rc genhtml_branch_coverage=1 00:09:12.009 --rc genhtml_function_coverage=1 00:09:12.009 --rc genhtml_legend=1 00:09:12.009 --rc geninfo_all_blocks=1 00:09:12.009 --rc geninfo_unexecuted_blocks=1 00:09:12.009 00:09:12.009 ' 00:09:12.009 00:03:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:12.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.009 --rc genhtml_branch_coverage=1 00:09:12.009 --rc genhtml_function_coverage=1 00:09:12.009 --rc genhtml_legend=1 00:09:12.009 --rc geninfo_all_blocks=1 00:09:12.009 --rc geninfo_unexecuted_blocks=1 00:09:12.009 00:09:12.009 ' 00:09:12.009 00:03:26 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:12.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.216 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.216 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.216 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.216 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.216 00:03:27 -- nvme/nvme.sh@79 -- # uname 00:09:13.216 00:03:27 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:13.216 00:03:27 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:13.216 00:03:27 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:13.216 00:03:27 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:13.216 00:03:27 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:13.216 00:03:27 -- common/autotest_common.sh@1055 -- # echo 0 00:09:13.216 Waiting for stub to ready for secondary processes... 00:09:13.216 00:03:27 -- common/autotest_common.sh@1057 -- # stubpid=75164 00:09:13.216 00:03:27 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:13.216 00:03:27 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:13.216 00:03:27 -- common/autotest_common.sh@1061 -- # [[ -e /proc/75164 ]] 00:09:13.216 00:03:27 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:13.216 00:03:27 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:13.478 [2024-11-28 00:03:27.829820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:13.478 [2024-11-28 00:03:27.829968] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.447 [2024-11-28 00:03:28.695745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.447 [2024-11-28 00:03:28.723198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.447 [2024-11-28 00:03:28.723810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.447 [2024-11-28 00:03:28.723811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.447 [2024-11-28 00:03:28.738245] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:14.447 [2024-11-28 00:03:28.752563] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:14.447 [2024-11-28 00:03:28.753059] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:14.447 [2024-11-28 00:03:28.756546] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:14.447 [2024-11-28 00:03:28.756847] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:14.447 [2024-11-28 00:03:28.756977] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:14.447 [2024-11-28 00:03:28.760407] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:14.447 [2024-11-28 00:03:28.760600] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:14.447 [2024-11-28 00:03:28.760735] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:14.447 [2024-11-28 00:03:28.763915] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:14.447 [2024-11-28 00:03:28.764090] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:14.447 [2024-11-28 00:03:28.764208] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:14.447 [2024-11-28 00:03:28.764323] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:14.447 [2024-11-28 00:03:28.764461] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:14.447 done. 00:09:14.447 00:03:28 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:14.447 00:03:28 -- common/autotest_common.sh@1064 -- # echo done. 
00:09:14.447 00:03:28 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:14.447 00:03:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:14.447 00:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.447 00:03:28 -- common/autotest_common.sh@10 -- # set +x 00:09:14.447 ************************************ 00:09:14.447 START TEST nvme_reset 00:09:14.447 ************************************ 00:09:14.447 00:03:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:14.447 Initializing NVMe Controllers 00:09:14.447 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:14.447 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:14.447 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:14.447 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:14.447 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:14.447 00:09:14.447 real 0m0.194s 00:09:14.447 user 0m0.053s 00:09:14.447 sys 0m0.100s 00:09:14.447 ************************************ 00:09:14.447 END TEST nvme_reset 00:09:14.447 00:03:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:14.447 00:03:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.447 ************************************ 00:09:14.733 00:03:29 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:14.734 00:03:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.734 00:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.734 00:03:29 -- common/autotest_common.sh@10 -- # set +x 00:09:14.734 ************************************ 00:09:14.734 START TEST nvme_identify 00:09:14.734 ************************************ 00:09:14.734 00:03:29 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:14.734 00:03:29 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:14.734 00:03:29 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:14.734 00:03:29 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:14.734 00:03:29 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:14.734 00:03:29 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:14.734 00:03:29 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:14.734 00:03:29 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:14.734 00:03:29 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:14.734 00:03:29 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:14.734 00:03:29 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:14.734 00:03:29 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:14.734 00:03:29 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:14.734 [2024-11-28 00:03:29.318764] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 75194 terminated unexpected 00:09:14.734 ===================================================== 00:09:14.734 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:14.734 ===================================================== 00:09:14.734 Controller Capabilities/Features 00:09:14.734 ================================ 00:09:14.734 Vendor ID: 1b36 00:09:14.734 Subsystem Vendor ID: 1af4 00:09:14.734 Serial Number: 12343 00:09:14.734 Model Number: QEMU NVMe Ctrl 00:09:14.734 Firmware Version: 8.0.0 00:09:14.734 Recommended Arb 
Burst: 6 00:09:14.734 IEEE OUI Identifier: 00 54 52 00:09:14.734 Multi-path I/O 00:09:14.734 May have multiple subsystem ports: No 00:09:14.734 May have multiple controllers: Yes 00:09:14.734 Associated with SR-IOV VF: No 00:09:14.734 Max Data Transfer Size: 524288 00:09:14.734 Max Number of Namespaces: 256 00:09:14.734 Max Number of I/O Queues: 64 00:09:14.734 NVMe Specification Version (VS): 1.4 00:09:14.734 NVMe Specification Version (Identify): 1.4 00:09:14.734 Maximum Queue Entries: 2048 00:09:14.734 Contiguous Queues Required: Yes 00:09:14.734 Arbitration Mechanisms Supported 00:09:14.734 Weighted Round Robin: Not Supported 00:09:14.734 Vendor Specific: Not Supported 00:09:14.734 Reset Timeout: 7500 ms 00:09:14.734 Doorbell Stride: 4 bytes 00:09:14.734 NVM Subsystem Reset: Not Supported 00:09:14.734 Command Sets Supported 00:09:14.734 NVM Command Set: Supported 00:09:14.734 Boot Partition: Not Supported 00:09:14.734 Memory Page Size Minimum: 4096 bytes 00:09:14.734 Memory Page Size Maximum: 65536 bytes 00:09:14.734 Persistent Memory Region: Not Supported 00:09:14.734 Optional Asynchronous Events Supported 00:09:14.734 Namespace Attribute Notices: Supported 00:09:14.734 Firmware Activation Notices: Not Supported 00:09:14.734 ANA Change Notices: Not Supported 00:09:14.734 PLE Aggregate Log Change Notices: Not Supported 00:09:14.734 LBA Status Info Alert Notices: Not Supported 00:09:14.734 EGE Aggregate Log Change Notices: Not Supported 00:09:14.734 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.734 Zone Descriptor Change Notices: Not Supported 00:09:14.734 Discovery Log Change Notices: Not Supported 00:09:14.734 Controller Attributes 00:09:14.734 128-bit Host Identifier: Not Supported 00:09:14.734 Non-Operational Permissive Mode: Not Supported 00:09:14.734 NVM Sets: Not Supported 00:09:14.734 Read Recovery Levels: Not Supported 00:09:14.734 Endurance Groups: Supported 00:09:14.734 Predictable Latency Mode: Not Supported 00:09:14.734 Traffic Based Keep ALive: Not Supported 00:09:14.734 Namespace Granularity: Not Supported 00:09:14.734 SQ Associations: Not Supported 00:09:14.734 UUID List: Not Supported 00:09:14.734 Multi-Domain Subsystem: Not Supported 00:09:14.734 Fixed Capacity Management: Not Supported 00:09:14.734 Variable Capacity Management: Not Supported 00:09:14.734 Delete Endurance Group: Not Supported 00:09:14.734 Delete NVM Set: Not Supported 00:09:14.734 Extended LBA Formats Supported: Supported 00:09:14.734 Flexible Data Placement Supported: Supported 00:09:14.734 00:09:14.734 Controller Memory Buffer Support 00:09:14.734 ================================ 00:09:14.734 Supported: No 00:09:14.734 00:09:14.734 Persistent Memory Region Support 00:09:14.734 ================================ 00:09:14.734 Supported: No 00:09:14.734 00:09:14.734 Admin Command Set Attributes 00:09:14.734 ============================ 00:09:14.734 Security Send/Receive: Not Supported 00:09:14.734 Format NVM: Supported 00:09:14.734 Firmware Activate/Download: Not Supported 00:09:14.734 Namespace Management: Supported 00:09:14.734 Device Self-Test: Not Supported 00:09:14.734 Directives: Supported 00:09:14.734 NVMe-MI: Not Supported 00:09:14.734 Virtualization Management: Not Supported 00:09:14.734 Doorbell Buffer Config: Supported 00:09:14.734 Get LBA Status Capability: Not Supported 00:09:14.734 Command & Feature Lockdown Capability: Not Supported 00:09:14.734 Abort Command Limit: 4 00:09:14.734 Async Event Request Limit: 4 00:09:14.734 Number of Firmware Slots: N/A 00:09:14.734 Firmware 
Slot 1 Read-Only: N/A 00:09:14.734 Firmware Activation Without Reset: N/A 00:09:14.734 Multiple Update Detection Support: N/A 00:09:14.734 Firmware Update Granularity: No Information Provided 00:09:14.734 Per-Namespace SMART Log: Yes 00:09:14.734 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.734 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:14.734 Command Effects Log Page: Supported 00:09:14.734 Get Log Page Extended Data: Supported 00:09:14.734 Telemetry Log Pages: Not Supported 00:09:14.734 Persistent Event Log Pages: Not Supported 00:09:14.734 Supported Log Pages Log Page: May Support 00:09:14.734 Commands Supported & Effects Log Page: Not Supported 00:09:14.734 Feature Identifiers & Effects Log Page:May Support 00:09:14.734 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.734 Data Area 4 for Telemetry Log: Not Supported 00:09:14.734 Error Log Page Entries Supported: 1 00:09:14.734 Keep Alive: Not Supported 00:09:14.734 00:09:14.734 NVM Command Set Attributes 00:09:14.734 ========================== 00:09:14.734 Submission Queue Entry Size 00:09:14.734 Max: 64 00:09:14.734 Min: 64 00:09:14.734 Completion Queue Entry Size 00:09:14.734 Max: 16 00:09:14.734 Min: 16 00:09:14.734 Number of Namespaces: 256 00:09:14.734 Compare Command: Supported 00:09:14.734 Write Uncorrectable Command: Not Supported 00:09:14.734 Dataset Management Command: Supported 00:09:14.734 Write Zeroes Command: Supported 00:09:14.734 Set Features Save Field: Supported 00:09:14.734 Reservations: Not Supported 00:09:14.734 Timestamp: Supported 00:09:14.734 Copy: Supported 00:09:14.734 Volatile Write Cache: Present 00:09:14.734 Atomic Write Unit (Normal): 1 00:09:14.734 Atomic Write Unit (PFail): 1 00:09:14.734 Atomic Compare & Write Unit: 1 00:09:14.734 Fused Compare & Write: Not Supported 00:09:14.734 Scatter-Gather List 00:09:14.734 SGL Command Set: Supported 00:09:14.734 SGL Keyed: Not Supported 00:09:14.734 SGL Bit Bucket Descriptor: Not Supported 00:09:14.734 SGL Metadata Pointer: Not Supported 00:09:14.734 Oversized SGL: Not Supported 00:09:14.734 SGL Metadata Address: Not Supported 00:09:14.734 SGL Offset: Not Supported 00:09:14.734 Transport SGL Data Block: Not Supported 00:09:14.734 Replay Protected Memory Block: Not Supported 00:09:14.734 00:09:14.734 Firmware Slot Information 00:09:14.734 ========================= 00:09:14.734 Active slot: 1 00:09:14.734 Slot 1 Firmware Revision: 1.0 00:09:14.734 00:09:14.734 00:09:14.734 Commands Supported and Effects 00:09:14.734 ============================== 00:09:14.734 Admin Commands 00:09:14.734 -------------- 00:09:14.734 Delete I/O Submission Queue (00h): Supported 00:09:14.734 Create I/O Submission Queue (01h): Supported 00:09:14.734 Get Log Page (02h): Supported 00:09:14.734 Delete I/O Completion Queue (04h): Supported 00:09:14.734 Create I/O Completion Queue (05h): Supported 00:09:14.734 Identify (06h): Supported 00:09:14.734 Abort (08h): Supported 00:09:14.734 Set Features (09h): Supported 00:09:14.734 Get Features (0Ah): Supported 00:09:14.734 Asynchronous Event Request (0Ch): Supported 00:09:14.734 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.734 Directive Send (19h): Supported 00:09:14.734 Directive Receive (1Ah): Supported 00:09:14.734 Virtualization Management (1Ch): Supported 00:09:14.734 Doorbell Buffer Config (7Ch): Supported 00:09:14.735 Format NVM (80h): Supported LBA-Change 00:09:14.735 I/O Commands 00:09:14.735 ------------ 00:09:14.735 Flush (00h): Supported LBA-Change 00:09:14.735 Write (01h): 
Supported LBA-Change 00:09:14.735 Read (02h): Supported 00:09:14.735 Compare (05h): Supported 00:09:14.735 Write Zeroes (08h): Supported LBA-Change 00:09:14.735 Dataset Management (09h): Supported LBA-Change 00:09:14.735 Unknown (0Ch): Supported 00:09:14.735 Unknown (12h): Supported 00:09:14.735 Copy (19h): Supported LBA-Change 00:09:14.735 Unknown (1Dh): Supported LBA-Change 00:09:14.735 00:09:14.735 Error Log 00:09:14.735 ========= 00:09:14.735 00:09:14.735 Arbitration 00:09:14.735 =========== 00:09:14.735 Arbitration Burst: no limit 00:09:14.735 00:09:14.735 Power Management 00:09:14.735 ================ 00:09:14.735 Number of Power States: 1 00:09:14.735 Current Power State: Power State #0 00:09:14.735 Power State #0: 00:09:14.735 Max Power: 25.00 W 00:09:14.735 Non-Operational State: Operational 00:09:14.735 Entry Latency: 16 microseconds 00:09:14.735 Exit Latency: 4 microseconds 00:09:14.735 Relative Read Throughput: 0 00:09:14.735 Relative Read Latency: 0 00:09:14.735 Relative Write Throughput: 0 00:09:14.735 Relative Write Latency: 0 00:09:14.735 Idle Power: Not Reported 00:09:14.735 Active Power: Not Reported 00:09:14.735 Non-Operational Permissive Mode: Not Supported 00:09:14.735 00:09:14.735 Health Information 00:09:14.735 ================== 00:09:14.735 Critical Warnings: 00:09:14.735 Available Spare Space: OK 00:09:14.735 Temperature: OK 00:09:14.735 Device Reliability: OK 00:09:14.735 Read Only: No 00:09:14.735 Volatile Memory Backup: OK 00:09:14.735 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.735 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.735 Available Spare: 0% 00:09:14.735 Available Spare Threshold: 0% 00:09:14.735 Life Percentage Used: 0% 00:09:14.735 Data Units Read: 1296 00:09:14.735 Data Units Written: 601 00:09:14.735 Host Read Commands: 61435 00:09:14.735 Host Write Commands: 30216 00:09:14.735 Controller Busy Time: 0 minutes 00:09:14.735 Power Cycles: 0 00:09:14.735 Power On Hours: 0 hours 00:09:14.735 Unsafe Shutdowns: 0 00:09:14.735 Unrecoverable Media Errors: 0 00:09:14.735 Lifetime Error Log Entries: 0 00:09:14.735 Warning Temperature Time: 0 minutes 00:09:14.735 Critical Temperature Time: 0 minutes 00:09:14.735 00:09:14.735 Number of Queues 00:09:14.735 ================ 00:09:14.735 Number of I/O Submission Queues: 64 00:09:14.735 Number of I/O Completion Queues: 64 00:09:14.735 00:09:14.735 ZNS Specific Controller Data 00:09:14.735 ============================ 00:09:14.735 Zone Append Size Limit: 0 00:09:14.735 00:09:14.735 00:09:14.735 Active Namespaces 00:09:14.735 ================= 00:09:14.735 Namespace ID:1 00:09:14.735 Error Recovery Timeout: Unlimited 00:09:14.735 Command Set Identifier: NVM (00h) 00:09:14.735 Deallocate: Supported 00:09:14.735 Deallocated/Unwritten Error: Supported 00:09:14.735 Deallocated Read Value: All 0x00 00:09:14.735 Deallocate in Write Zeroes: Not Supported 00:09:14.735 Deallocated Guard Field: 0xFFFF 00:09:14.735 Flush: Supported 00:09:14.735 Reservation: Not Supported 00:09:14.735 Namespace Sharing Capabilities: Multiple Controllers 00:09:14.735 Size (in LBAs): 262144 (1GiB) 00:09:14.735 Capacity (in LBAs): 262144 (1GiB) 00:09:14.735 Utilization (in LBAs): 262144 (1GiB) 00:09:14.735 Thin Provisioning: Not Supported 00:09:14.735 Per-NS Atomic Units: No 00:09:14.735 Maximum Single Source Range Length: 128 00:09:14.735 Maximum Copy Length: 128 00:09:14.735 Maximum Source Range Count: 128 00:09:14.735 NGUID/EUI64 Never Reused: No 00:09:14.735 Namespace Write Protected: No 00:09:14.735 Endurance group ID: 1 
00:09:14.735 Number of LBA Formats: 8 00:09:14.735 Current LBA Format: LBA Format #04 00:09:14.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.735 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.735 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.735 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.735 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.735 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.735 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.735 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.735 00:09:14.735 Get Feature FDP: 00:09:14.735 ================ 00:09:14.735 Enabled: Yes 00:09:14.735 FDP configuration index: 0 00:09:14.735 00:09:14.735 FDP configurations log page 00:09:14.735 =========================== 00:09:14.735 Number of FDP configurations: 1 00:09:14.735 Version: 0 00:09:14.735 Size: 112 00:09:14.735 FDP Configuration Descriptor: 0 00:09:14.735 Descriptor Size: 96 00:09:14.735 Reclaim Group Identifier format: 2 00:09:14.735 FDP Volatile Write Cache: Not Present 00:09:14.735 FDP Configuration: Valid 00:09:14.735 Vendor Specific Size: 0 00:09:14.735 Number of Reclaim Groups: 2 00:09:14.735 Number of Recalim Unit Handles: 8 00:09:14.735 Max Placement Identifiers: 128 00:09:14.735 Number of Namespaces Suppprted: 256 00:09:14.735 Reclaim unit Nominal Size: 6000000 bytes 00:09:14.735 Estimated Reclaim Unit Time Limit: Not Reported 00:09:14.735 RUH Desc #000: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #001: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #002: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #003: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #004: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #005: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #006: RUH Type: Initially Isolated 00:09:14.735 RUH Desc #007: RUH Type: Initially Isolated 00:09:14.735 00:09:14.735 FDP reclaim unit handle usage log page 00:09:14.735 =================================[2024-11-28 00:03:29.321481] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 75194 terminated unexpected 00:09:14.735 ===== 00:09:14.735 Number of Reclaim Unit Handles: 8 00:09:14.735 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:14.735 RUH Usage Desc #001: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #002: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #003: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #004: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #005: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #006: RUH Attributes: Unused 00:09:14.735 RUH Usage Desc #007: RUH Attributes: Unused 00:09:14.735 00:09:14.735 FDP statistics log page 00:09:14.735 ======================= 00:09:14.735 Host bytes with metadata written: 408064000 00:09:14.735 Media bytes with metadata written: 408158208 00:09:14.735 Media bytes erased: 0 00:09:14.735 00:09:14.735 FDP events log page 00:09:14.735 =================== 00:09:14.735 Number of FDP events: 0 00:09:14.735 00:09:14.735 ===================================================== 00:09:14.735 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:14.735 ===================================================== 00:09:14.735 Controller Capabilities/Features 00:09:14.735 ================================ 00:09:14.735 Vendor ID: 1b36 00:09:14.735 Subsystem Vendor ID: 1af4 00:09:14.735 Serial Number: 12340 00:09:14.735 Model Number: QEMU NVMe Ctrl 00:09:14.735 Firmware Version: 8.0.0 00:09:14.735 Recommended 
Arb Burst: 6 00:09:14.735 IEEE OUI Identifier: 00 54 52 00:09:14.735 Multi-path I/O 00:09:14.735 May have multiple subsystem ports: No 00:09:14.735 May have multiple controllers: No 00:09:14.735 Associated with SR-IOV VF: No 00:09:14.735 Max Data Transfer Size: 524288 00:09:14.735 Max Number of Namespaces: 256 00:09:14.735 Max Number of I/O Queues: 64 00:09:14.735 NVMe Specification Version (VS): 1.4 00:09:14.735 NVMe Specification Version (Identify): 1.4 00:09:14.735 Maximum Queue Entries: 2048 00:09:14.735 Contiguous Queues Required: Yes 00:09:14.735 Arbitration Mechanisms Supported 00:09:14.735 Weighted Round Robin: Not Supported 00:09:14.735 Vendor Specific: Not Supported 00:09:14.735 Reset Timeout: 7500 ms 00:09:14.735 Doorbell Stride: 4 bytes 00:09:14.735 NVM Subsystem Reset: Not Supported 00:09:14.735 Command Sets Supported 00:09:14.735 NVM Command Set: Supported 00:09:14.735 Boot Partition: Not Supported 00:09:14.735 Memory Page Size Minimum: 4096 bytes 00:09:14.735 Memory Page Size Maximum: 65536 bytes 00:09:14.735 Persistent Memory Region: Not Supported 00:09:14.735 Optional Asynchronous Events Supported 00:09:14.735 Namespace Attribute Notices: Supported 00:09:14.735 Firmware Activation Notices: Not Supported 00:09:14.735 ANA Change Notices: Not Supported 00:09:14.736 PLE Aggregate Log Change Notices: Not Supported 00:09:14.736 LBA Status Info Alert Notices: Not Supported 00:09:14.736 EGE Aggregate Log Change Notices: Not Supported 00:09:14.736 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.736 Zone Descriptor Change Notices: Not Supported 00:09:14.736 Discovery Log Change Notices: Not Supported 00:09:14.736 Controller Attributes 00:09:14.736 128-bit Host Identifier: Not Supported 00:09:14.736 Non-Operational Permissive Mode: Not Supported 00:09:14.736 NVM Sets: Not Supported 00:09:14.736 Read Recovery Levels: Not Supported 00:09:14.736 Endurance Groups: Not Supported 00:09:14.736 Predictable Latency Mode: Not Supported 00:09:14.736 Traffic Based Keep ALive: Not Supported 00:09:14.736 Namespace Granularity: Not Supported 00:09:14.736 SQ Associations: Not Supported 00:09:14.736 UUID List: Not Supported 00:09:14.736 Multi-Domain Subsystem: Not Supported 00:09:14.736 Fixed Capacity Management: Not Supported 00:09:14.736 Variable Capacity Management: Not Supported 00:09:14.736 Delete Endurance Group: Not Supported 00:09:14.736 Delete NVM Set: Not Supported 00:09:14.736 Extended LBA Formats Supported: Supported 00:09:14.736 Flexible Data Placement Supported: Not Supported 00:09:14.736 00:09:14.736 Controller Memory Buffer Support 00:09:14.736 ================================ 00:09:14.736 Supported: No 00:09:14.736 00:09:14.736 Persistent Memory Region Support 00:09:14.736 ================================ 00:09:14.736 Supported: No 00:09:14.736 00:09:14.736 Admin Command Set Attributes 00:09:14.736 ============================ 00:09:14.736 Security Send/Receive: Not Supported 00:09:14.736 Format NVM: Supported 00:09:14.736 Firmware Activate/Download: Not Supported 00:09:14.736 Namespace Management: Supported 00:09:14.736 Device Self-Test: Not Supported 00:09:14.736 Directives: Supported 00:09:14.736 NVMe-MI: Not Supported 00:09:14.736 Virtualization Management: Not Supported 00:09:14.736 Doorbell Buffer Config: Supported 00:09:14.736 Get LBA Status Capability: Not Supported 00:09:14.736 Command & Feature Lockdown Capability: Not Supported 00:09:14.736 Abort Command Limit: 4 00:09:14.736 Async Event Request Limit: 4 00:09:14.736 Number of Firmware Slots: N/A 00:09:14.736 
Firmware Slot 1 Read-Only: N/A 00:09:14.736 Firmware Activation Without Reset: N/A 00:09:14.736 Multiple Update Detection Support: N/A 00:09:14.736 Firmware Update Granularity: No Information Provided 00:09:14.736 Per-Namespace SMART Log: Yes 00:09:14.736 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.736 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:14.736 Command Effects Log Page: Supported 00:09:14.736 Get Log Page Extended Data: Supported 00:09:14.736 Telemetry Log Pages: Not Supported 00:09:14.736 Persistent Event Log Pages: Not Supported 00:09:14.736 Supported Log Pages Log Page: May Support 00:09:14.736 Commands Supported & Effects Log Page: Not Supported 00:09:14.736 Feature Identifiers & Effects Log Page:May Support 00:09:14.736 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.736 Data Area 4 for Telemetry Log: Not Supported 00:09:14.736 Error Log Page Entries Supported: 1 00:09:14.736 Keep Alive: Not Supported 00:09:14.736 00:09:14.736 NVM Command Set Attributes 00:09:14.736 ========================== 00:09:14.736 Submission Queue Entry Size 00:09:14.736 Max: 64 00:09:14.736 Min: 64 00:09:14.736 Completion Queue Entry Size 00:09:14.736 Max: 16 00:09:14.736 Min: 16 00:09:14.736 Number of Namespaces: 256 00:09:14.736 Compare Command: Supported 00:09:14.736 Write Uncorrectable Command: Not Supported 00:09:14.736 Dataset Management Command: Supported 00:09:14.736 Write Zeroes Command: Supported 00:09:14.736 Set Features Save Field: Supported 00:09:14.736 Reservations: Not Supported 00:09:14.736 Timestamp: Supported 00:09:14.736 Copy: Supported 00:09:14.736 Volatile Write Cache: Present 00:09:14.736 Atomic Write Unit (Normal): 1 00:09:14.736 Atomic Write Unit (PFail): 1 00:09:14.736 Atomic Compare & Write Unit: 1 00:09:14.736 Fused Compare & Write: Not Supported 00:09:14.736 Scatter-Gather List 00:09:14.736 SGL Command Set: Supported 00:09:14.736 SGL Keyed: Not Supported 00:09:14.736 SGL Bit Bucket Descriptor: Not Supported 00:09:14.736 SGL Metadata Pointer: Not Supported 00:09:14.736 Oversized SGL: Not Supported 00:09:14.736 SGL Metadata Address: Not Supported 00:09:14.736 SGL Offset: Not Supported 00:09:14.736 Transport SGL Data Block: Not Supported 00:09:14.736 Replay Protected Memory Block: Not Supported 00:09:14.736 00:09:14.736 Firmware Slot Information 00:09:14.736 ========================= 00:09:14.736 Active slot: 1 00:09:14.736 Slot 1 Firmware Revision: 1.0 00:09:14.736 00:09:14.736 00:09:14.736 Commands Supported and Effects 00:09:14.736 ============================== 00:09:14.736 Admin Commands 00:09:14.736 -------------- 00:09:14.736 Delete I/O Submission Queue (00h): Supported 00:09:14.736 Create I/O Submission Queue (01h): Supported 00:09:14.736 Get Log Page (02h): Supported 00:09:14.736 Delete I/O Completion Queue (04h): Supported 00:09:14.736 Create I/O Completion Queue (05h): Supported 00:09:14.736 Identify (06h): Supported 00:09:14.736 Abort (08h): Supported 00:09:14.736 Set Features (09h): Supported 00:09:14.736 Get Features (0Ah): Supported 00:09:14.736 Asynchronous Event Request (0Ch): Supported 00:09:14.736 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.736 Directive Send (19h): Supported 00:09:14.736 Directive Receive (1Ah): Supported 00:09:14.736 Virtualization Management (1Ch): Supported 00:09:14.736 Doorbell Buffer Config (7Ch): Supported 00:09:14.736 Format NVM (80h): Supported LBA-Change 00:09:14.736 I/O Commands 00:09:14.736 ------------ 00:09:14.736 Flush (00h): Supported LBA-Change 00:09:14.736 Write (01h): 
Supported LBA-Change 00:09:14.736 Read (02h): Supported 00:09:14.736 Compare (05h): Supported 00:09:14.736 Write Zeroes (08h): Supported LBA-Change 00:09:14.736 Dataset Management (09h): Supported LBA-Change 00:09:14.736 Unknown (0Ch): Supported 00:09:14.736 [2024-11-28 00:03:29.322424] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 75194 terminated unexpected 00:09:14.736 Unknown (12h): Supported 00:09:14.736 Copy (19h): Supported LBA-Change 00:09:14.736 Unknown (1Dh): Supported LBA-Change 00:09:14.736 00:09:14.736 Error Log 00:09:14.736 ========= 00:09:14.736 00:09:14.736 Arbitration 00:09:14.736 =========== 00:09:14.736 Arbitration Burst: no limit 00:09:14.736 00:09:14.736 Power Management 00:09:14.736 ================ 00:09:14.736 Number of Power States: 1 00:09:14.736 Current Power State: Power State #0 00:09:14.736 Power State #0: 00:09:14.736 Max Power: 25.00 W 00:09:14.736 Non-Operational State: Operational 00:09:14.736 Entry Latency: 16 microseconds 00:09:14.736 Exit Latency: 4 microseconds 00:09:14.736 Relative Read Throughput: 0 00:09:14.736 Relative Read Latency: 0 00:09:14.736 Relative Write Throughput: 0 00:09:14.736 Relative Write Latency: 0 00:09:14.736 Idle Power: Not Reported 00:09:14.736 Active Power: Not Reported 00:09:14.736 Non-Operational Permissive Mode: Not Supported 00:09:14.736 00:09:14.736 Health Information 00:09:14.736 ================== 00:09:14.736 Critical Warnings: 00:09:14.736 Available Spare Space: OK 00:09:14.736 Temperature: OK 00:09:14.736 Device Reliability: OK 00:09:14.736 Read Only: No 00:09:14.736 Volatile Memory Backup: OK 00:09:14.736 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.736 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.736 Available Spare: 0% 00:09:14.736 Available Spare Threshold: 0% 00:09:14.736 Life Percentage Used: 0% 00:09:14.736 Data Units Read: 1809 00:09:14.736 Data Units Written: 835 00:09:14.736 Host Read Commands: 88590 00:09:14.736 Host Write Commands: 43979 00:09:14.736 Controller Busy Time: 0 minutes 00:09:14.736 Power Cycles: 0 00:09:14.736 Power On Hours: 0 hours 00:09:14.736 Unsafe Shutdowns: 0 00:09:14.736 Unrecoverable Media Errors: 0 00:09:14.736 Lifetime Error Log Entries: 0 00:09:14.736 Warning Temperature Time: 0 minutes 00:09:14.736 Critical Temperature Time: 0 minutes 00:09:14.736 00:09:14.736 Number of Queues 00:09:14.736 ================ 00:09:14.736 Number of I/O Submission Queues: 64 00:09:14.736 Number of I/O Completion Queues: 64 00:09:14.736 00:09:14.736 ZNS Specific Controller Data 00:09:14.736 ============================ 00:09:14.736 Zone Append Size Limit: 0 00:09:14.736 00:09:14.736 00:09:14.736 Active Namespaces 00:09:14.736 ================= 00:09:14.736 Namespace ID:1 00:09:14.737 Error Recovery Timeout: Unlimited 00:09:14.737 Command Set Identifier: NVM (00h) 00:09:14.737 Deallocate: Supported 00:09:14.737 Deallocated/Unwritten Error: Supported 00:09:14.737 Deallocated Read Value: All 0x00 00:09:14.737 Deallocate in Write Zeroes: Not Supported 00:09:14.737 Deallocated Guard Field: 0xFFFF 00:09:14.737 Flush: Supported 00:09:14.737 Reservation: Not Supported 00:09:14.737 Metadata Transferred as: Separate Metadata Buffer 00:09:14.737 Namespace Sharing Capabilities: Private 00:09:14.737 Size (in LBAs): 1548666 (5GiB) 00:09:14.737 Capacity (in LBAs): 1548666 (5GiB) 00:09:14.737 Utilization (in LBAs): 1548666 (5GiB) 00:09:14.737 Thin Provisioning: Not Supported 00:09:14.737 Per-NS Atomic Units: No 00:09:14.737 Maximum Single Source Range Length: 
128 00:09:14.737 Maximum Copy Length: 128 00:09:14.737 Maximum Source Range Count: 128 00:09:14.737 NGUID/EUI64 Never Reused: No 00:09:14.737 Namespace Write Protected: No 00:09:14.737 Number of LBA Formats: 8 00:09:14.737 Current LBA Format: LBA Format #07 00:09:14.737 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.737 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.737 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.737 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.737 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.737 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.737 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.737 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.737 00:09:14.737 ===================================================== 00:09:14.737 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:14.737 ===================================================== 00:09:14.737 Controller Capabilities/Features 00:09:14.737 ================================ 00:09:14.737 Vendor ID: 1b36 00:09:14.737 Subsystem Vendor ID: 1af4 00:09:14.737 Serial Number: 12341 00:09:14.737 Model Number: QEMU NVMe Ctrl 00:09:14.737 Firmware Version: 8.0.0 00:09:14.737 Recommended Arb Burst: 6 00:09:14.737 IEEE OUI Identifier: 00 54 52 00:09:14.737 Multi-path I/O 00:09:14.737 May have multiple subsystem ports: No 00:09:14.737 May have multiple controllers: No 00:09:14.737 Associated with SR-IOV VF: No 00:09:14.737 Max Data Transfer Size: 524288 00:09:14.737 Max Number of Namespaces: 256 00:09:14.737 Max Number of I/O Queues: 64 00:09:14.737 NVMe Specification Version (VS): 1.4 00:09:14.737 NVMe Specification Version (Identify): 1.4 00:09:14.737 Maximum Queue Entries: 2048 00:09:14.737 Contiguous Queues Required: Yes 00:09:14.737 Arbitration Mechanisms Supported 00:09:14.737 Weighted Round Robin: Not Supported 00:09:14.737 Vendor Specific: Not Supported 00:09:14.737 Reset Timeout: 7500 ms 00:09:14.737 Doorbell Stride: 4 bytes 00:09:14.737 NVM Subsystem Reset: Not Supported 00:09:14.737 Command Sets Supported 00:09:14.737 NVM Command Set: Supported 00:09:14.737 Boot Partition: Not Supported 00:09:14.737 Memory Page Size Minimum: 4096 bytes 00:09:14.737 Memory Page Size Maximum: 65536 bytes 00:09:14.737 Persistent Memory Region: Not Supported 00:09:14.737 Optional Asynchronous Events Supported 00:09:14.737 Namespace Attribute Notices: Supported 00:09:14.737 Firmware Activation Notices: Not Supported 00:09:14.737 ANA Change Notices: Not Supported 00:09:14.737 PLE Aggregate Log Change Notices: Not Supported 00:09:14.737 LBA Status Info Alert Notices: Not Supported 00:09:14.737 EGE Aggregate Log Change Notices: Not Supported 00:09:14.737 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.737 Zone Descriptor Change Notices: Not Supported 00:09:14.737 Discovery Log Change Notices: Not Supported 00:09:14.737 Controller Attributes 00:09:14.737 128-bit Host Identifier: Not Supported 00:09:14.737 Non-Operational Permissive Mode: Not Supported 00:09:14.737 NVM Sets: Not Supported 00:09:14.737 Read Recovery Levels: Not Supported 00:09:14.737 Endurance Groups: Not Supported 00:09:14.737 Predictable Latency Mode: Not Supported 00:09:14.737 Traffic Based Keep ALive: Not Supported 00:09:14.737 Namespace Granularity: Not Supported 00:09:14.737 SQ Associations: Not Supported 00:09:14.737 UUID List: Not Supported 00:09:14.737 Multi-Domain Subsystem: Not Supported 00:09:14.737 Fixed Capacity Management: Not Supported 00:09:14.737 Variable Capacity 
Management: Not Supported 00:09:14.737 Delete Endurance Group: Not Supported 00:09:14.737 Delete NVM Set: Not Supported 00:09:14.737 Extended LBA Formats Supported: Supported 00:09:14.737 Flexible Data Placement Supported: Not Supported 00:09:14.737 00:09:14.737 Controller Memory Buffer Support 00:09:14.737 ================================ 00:09:14.737 Supported: No 00:09:14.737 00:09:14.737 Persistent Memory Region Support 00:09:14.737 ================================ 00:09:14.737 Supported: No 00:09:14.737 00:09:14.737 Admin Command Set Attributes 00:09:14.737 ============================ 00:09:14.737 Security Send/Receive: Not Supported 00:09:14.737 Format NVM: Supported 00:09:14.737 Firmware Activate/Download: Not Supported 00:09:14.737 Namespace Management: Supported 00:09:14.737 Device Self-Test: Not Supported 00:09:14.737 Directives: Supported 00:09:14.737 NVMe-MI: Not Supported 00:09:14.737 Virtualization Management: Not Supported 00:09:14.737 Doorbell Buffer Config: Supported 00:09:14.737 Get LBA Status Capability: Not Supported 00:09:14.737 Command & Feature Lockdown Capability: Not Supported 00:09:14.737 Abort Command Limit: 4 00:09:14.737 Async Event Request Limit: 4 00:09:14.737 Number of Firmware Slots: N/A 00:09:14.737 Firmware Slot 1 Read-Only: N/A 00:09:14.737 Firmware Activation Without Reset: N/A 00:09:14.737 Multiple Update Detection Support: N/A 00:09:14.737 Firmware Update Granularity: No Information Provided 00:09:14.737 Per-Namespace SMART Log: Yes 00:09:14.737 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.737 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:14.737 Command Effects Log Page: Supported 00:09:14.737 Get Log Page Extended Data: Supported 00:09:14.737 Telemetry Log Pages: Not Supported 00:09:14.737 Persistent Event Log Pages: Not Supported 00:09:14.737 Supported Log Pages Log Page: May Support 00:09:14.737 Commands Supported & Effects Log Page: Not Supported 00:09:14.737 Feature Identifiers & Effects Log Page:May Support 00:09:14.737 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.737 Data Area 4 for Telemetry Log: Not Supported 00:09:14.737 Error Log Page Entries Supported: 1 00:09:14.737 Keep Alive: Not Supported 00:09:14.737 00:09:14.737 NVM Command Set Attributes 00:09:14.737 ========================== 00:09:14.737 Submission Queue Entry Size 00:09:14.737 Max: 64 00:09:14.737 Min: 64 00:09:14.737 Completion Queue Entry Size 00:09:14.737 Max: 16 00:09:14.737 Min: 16 00:09:14.737 Number of Namespaces: 256 00:09:14.737 Compare Command: Supported 00:09:14.737 Write Uncorrectable Command: Not Supported 00:09:14.737 Dataset Management Command: Supported 00:09:14.737 Write Zeroes Command: Supported 00:09:14.737 Set Features Save Field: Supported 00:09:14.737 Reservations: Not Supported 00:09:14.737 Timestamp: Supported 00:09:14.737 Copy: Supported 00:09:14.737 Volatile Write Cache: Present 00:09:14.737 Atomic Write Unit (Normal): 1 00:09:14.737 Atomic Write Unit (PFail): 1 00:09:14.737 Atomic Compare & Write Unit: 1 00:09:14.737 Fused Compare & Write: Not Supported 00:09:14.737 Scatter-Gather List 00:09:14.737 SGL Command Set: Supported 00:09:14.737 SGL Keyed: Not Supported 00:09:14.737 SGL Bit Bucket Descriptor: Not Supported 00:09:14.737 SGL Metadata Pointer: Not Supported 00:09:14.737 Oversized SGL: Not Supported 00:09:14.737 SGL Metadata Address: Not Supported 00:09:14.737 SGL Offset: Not Supported 00:09:14.737 Transport SGL Data Block: Not Supported 00:09:14.737 Replay Protected Memory Block: Not Supported 00:09:14.737 
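The identify dumps in this log are flat "Field: Value" text, one entry per timestamped line. A minimal Python sketch for pulling a few fields out of a captured dump (the file name identify_12341.txt and the field selection are hypothetical; it assumes the leading timestamps have been stripped so each entry sits on its own line):

    import re

    # Fields to extract from a saved spdk_nvme_identify dump (hypothetical selection).
    FIELDS = {"Serial Number", "Model Number", "Firmware Version",
              "Max Data Transfer Size", "Number of Namespaces"}

    def parse_identify(path):
        """Return the first value seen for each requested 'Field: Value' entry."""
        values = {}
        with open(path) as f:
            for line in f:
                m = re.match(r"\s*([^:]+):\s*(.+?)\s*$", line)
                if not m:
                    continue
                key = m.group(1).strip()
                if key in FIELDS and key not in values:
                    values[key] = m.group(2)
        return values

    if __name__ == "__main__":
        print(parse_identify("identify_12341.txt"))
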
00:09:14.737 Firmware Slot Information 00:09:14.737 ========================= 00:09:14.737 Active slot: 1 00:09:14.737 Slot 1 Firmware Revision: 1.0 00:09:14.737 00:09:14.737 00:09:14.737 Commands Supported and Effects 00:09:14.737 ============================== 00:09:14.737 Admin Commands 00:09:14.737 -------------- 00:09:14.737 Delete I/O Submission Queue (00h): Supported 00:09:14.737 Create I/O Submission Queue (01h): Supported 00:09:14.737 Get Log Page (02h): Supported 00:09:14.737 Delete I/O Completion Queue (04h): Supported 00:09:14.737 Create I/O Completion Queue (05h): Supported 00:09:14.737 Identify (06h): Supported 00:09:14.737 Abort (08h): Supported 00:09:14.737 Set Features (09h): Supported 00:09:14.737 Get Features (0Ah): Supported 00:09:14.737 Asynchronous Event Request (0Ch): Supported 00:09:14.737 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.738 Directive Send (19h): Supported 00:09:14.738 Directive Receive (1Ah): Supported 00:09:14.738 Virtualization Management (1Ch): Supported 00:09:14.738 Doorbell Buffer Config (7Ch): Supported 00:09:14.738 Format NVM (80h): Supported LBA-Change 00:09:14.738 I/O Commands 00:09:14.738 ------------ 00:09:14.738 Flush (00h): Supported LBA-Change 00:09:14.738 Write (01h): Supported LBA-Change 00:09:14.738 Read (02h): Supported 00:09:14.738 Compare (05h): Supported 00:09:14.738 Write Zeroes (08h): Supported LBA-Change 00:09:14.738 Dataset Management (09h): Supported LBA-Change 00:09:14.738 Unknown (0Ch): Supported 00:09:14.738 Unknown (12h): Supported 00:09:14.738 Copy (19h): Supported LBA-Change 00:09:14.738 Unknown (1Dh): Supported LBA-Change 00:09:14.738 00:09:14.738 Error Log 00:09:14.738 ========= 00:09:14.738 00:09:14.738 Arbitration 00:09:14.738 =========== 00:09:14.738 Arbitration Burst: no limit 00:09:14.738 00:09:14.738 Power Management 00:09:14.738 ================ 00:09:14.738 Number of Power States: 1 00:09:14.738 Current Power State: Power State #0 00:09:14.738 Power State #0: 00:09:14.738 Max Power: 25.00 W 00:09:14.738 Non-Operational State: Operational 00:09:14.738 Entry Latency: 16 microseconds 00:09:14.738 Exit Latency: 4 microseconds 00:09:14.738 Relative Read Throughput: 0 00:09:14.738 Relative Read Latency: 0 00:09:14.738 Relative Write Throughput: 0 00:09:14.738 Relative Write Latency: 0 00:09:14.738 Idle Power: Not Reported 00:09:14.738 Active Power: Not Reported 00:09:14.738 Non-Operational Permissive Mode: Not Supported 00:09:14.738 00:09:14.738 Health Information 00:09:14.738 ================== 00:09:14.738 Critical Warnings: 00:09:14.738 Available Spare Space: OK 00:09:14.738 Temperature: OK 00:09:14.738 Device Reliability: OK 00:09:14.738 Read Only: No 00:09:14.738 Volatile Memory Backup: OK 00:09:14.738 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.738 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.738 Available Spare: 0% 00:09:14.738 Available Spare Threshold: 0% 00:09:14.738 Life Percentage Used: 0% 00:09:14.738 Data Units Read: 1220 00:09:14.738 Data Units Written: [2024-11-28 00:03:29.323421] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 75194 terminated unexpected 00:09:14.738 567 00:09:14.738 Host Read Commands: 60723 00:09:14.738 Host Write Commands: 29884 00:09:14.738 Controller Busy Time: 0 minutes 00:09:14.738 Power Cycles: 0 00:09:14.738 Power On Hours: 0 hours 00:09:14.738 Unsafe Shutdowns: 0 00:09:14.738 Unrecoverable Media Errors: 0 00:09:14.738 Lifetime Error Log Entries: 0 00:09:14.738 Warning Temperature Time: 0 minutes 
00:09:14.738 Critical Temperature Time: 0 minutes 00:09:14.738 00:09:14.738 Number of Queues 00:09:14.738 ================ 00:09:14.738 Number of I/O Submission Queues: 64 00:09:14.738 Number of I/O Completion Queues: 64 00:09:14.738 00:09:14.738 ZNS Specific Controller Data 00:09:14.738 ============================ 00:09:14.738 Zone Append Size Limit: 0 00:09:14.738 00:09:14.738 00:09:14.738 Active Namespaces 00:09:14.738 ================= 00:09:14.738 Namespace ID:1 00:09:14.738 Error Recovery Timeout: Unlimited 00:09:14.738 Command Set Identifier: NVM (00h) 00:09:14.738 Deallocate: Supported 00:09:14.738 Deallocated/Unwritten Error: Supported 00:09:14.738 Deallocated Read Value: All 0x00 00:09:14.738 Deallocate in Write Zeroes: Not Supported 00:09:14.738 Deallocated Guard Field: 0xFFFF 00:09:14.738 Flush: Supported 00:09:14.738 Reservation: Not Supported 00:09:14.738 Namespace Sharing Capabilities: Private 00:09:14.738 Size (in LBAs): 1310720 (5GiB) 00:09:14.738 Capacity (in LBAs): 1310720 (5GiB) 00:09:14.738 Utilization (in LBAs): 1310720 (5GiB) 00:09:14.738 Thin Provisioning: Not Supported 00:09:14.738 Per-NS Atomic Units: No 00:09:14.738 Maximum Single Source Range Length: 128 00:09:14.738 Maximum Copy Length: 128 00:09:14.738 Maximum Source Range Count: 128 00:09:14.738 NGUID/EUI64 Never Reused: No 00:09:14.738 Namespace Write Protected: No 00:09:14.738 Number of LBA Formats: 8 00:09:14.738 Current LBA Format: LBA Format #04 00:09:14.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:14.738 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:14.738 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:14.738 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:14.738 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:14.738 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:14.738 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:14.738 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:14.738 00:09:14.738 ===================================================== 00:09:14.738 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:14.738 ===================================================== 00:09:14.738 Controller Capabilities/Features 00:09:14.738 ================================ 00:09:14.738 Vendor ID: 1b36 00:09:14.738 Subsystem Vendor ID: 1af4 00:09:14.738 Serial Number: 12342 00:09:14.738 Model Number: QEMU NVMe Ctrl 00:09:14.738 Firmware Version: 8.0.0 00:09:14.738 Recommended Arb Burst: 6 00:09:14.738 IEEE OUI Identifier: 00 54 52 00:09:14.738 Multi-path I/O 00:09:14.738 May have multiple subsystem ports: No 00:09:14.738 May have multiple controllers: No 00:09:14.738 Associated with SR-IOV VF: No 00:09:14.738 Max Data Transfer Size: 524288 00:09:14.738 Max Number of Namespaces: 256 00:09:14.738 Max Number of I/O Queues: 64 00:09:14.738 NVMe Specification Version (VS): 1.4 00:09:14.738 NVMe Specification Version (Identify): 1.4 00:09:14.738 Maximum Queue Entries: 2048 00:09:14.738 Contiguous Queues Required: Yes 00:09:14.738 Arbitration Mechanisms Supported 00:09:14.738 Weighted Round Robin: Not Supported 00:09:14.738 Vendor Specific: Not Supported 00:09:14.738 Reset Timeout: 7500 ms 00:09:14.738 Doorbell Stride: 4 bytes 00:09:14.738 NVM Subsystem Reset: Not Supported 00:09:14.738 Command Sets Supported 00:09:14.738 NVM Command Set: Supported 00:09:14.738 Boot Partition: Not Supported 00:09:14.738 Memory Page Size Minimum: 4096 bytes 00:09:14.738 Memory Page Size Maximum: 65536 bytes 00:09:14.738 Persistent Memory Region: Not Supported 00:09:14.738 
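A quick sanity check on the figures above: namespace capacity in bytes is simply the LBA count times the data size of the current LBA format, and the temperature fields are raw Kelvin values (the tool's Celsius figure uses an integer 273 offset). A small sketch using the 12341 controller's values from the dump above:

    # 12341 namespace: 1310720 LBAs, current LBA format #04 (4096-byte data, no metadata).
    lbas = 1310720
    lba_data_size = 4096
    size_bytes = lbas * lba_data_size
    print(size_bytes, size_bytes / (1 << 30), "GiB")   # 5368709120 -> 5.0 GiB

    # Temperature fields are reported in Kelvin.
    current_k, threshold_k = 323, 343
    print(current_k - 273, threshold_k - 273)          # 50 C, 70 C
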
Optional Asynchronous Events Supported 00:09:14.738 Namespace Attribute Notices: Supported 00:09:14.738 Firmware Activation Notices: Not Supported 00:09:14.738 ANA Change Notices: Not Supported 00:09:14.738 PLE Aggregate Log Change Notices: Not Supported 00:09:14.738 LBA Status Info Alert Notices: Not Supported 00:09:14.738 EGE Aggregate Log Change Notices: Not Supported 00:09:14.738 Normal NVM Subsystem Shutdown event: Not Supported 00:09:14.738 Zone Descriptor Change Notices: Not Supported 00:09:14.738 Discovery Log Change Notices: Not Supported 00:09:14.738 Controller Attributes 00:09:14.738 128-bit Host Identifier: Not Supported 00:09:14.738 Non-Operational Permissive Mode: Not Supported 00:09:14.738 NVM Sets: Not Supported 00:09:14.738 Read Recovery Levels: Not Supported 00:09:14.738 Endurance Groups: Not Supported 00:09:14.738 Predictable Latency Mode: Not Supported 00:09:14.738 Traffic Based Keep ALive: Not Supported 00:09:14.738 Namespace Granularity: Not Supported 00:09:14.738 SQ Associations: Not Supported 00:09:14.738 UUID List: Not Supported 00:09:14.738 Multi-Domain Subsystem: Not Supported 00:09:14.738 Fixed Capacity Management: Not Supported 00:09:14.738 Variable Capacity Management: Not Supported 00:09:14.738 Delete Endurance Group: Not Supported 00:09:14.738 Delete NVM Set: Not Supported 00:09:14.738 Extended LBA Formats Supported: Supported 00:09:14.739 Flexible Data Placement Supported: Not Supported 00:09:14.739 00:09:14.739 Controller Memory Buffer Support 00:09:14.739 ================================ 00:09:14.739 Supported: No 00:09:14.739 00:09:14.739 Persistent Memory Region Support 00:09:14.739 ================================ 00:09:14.739 Supported: No 00:09:14.739 00:09:14.739 Admin Command Set Attributes 00:09:14.739 ============================ 00:09:14.739 Security Send/Receive: Not Supported 00:09:14.739 Format NVM: Supported 00:09:14.739 Firmware Activate/Download: Not Supported 00:09:14.739 Namespace Management: Supported 00:09:14.739 Device Self-Test: Not Supported 00:09:14.739 Directives: Supported 00:09:14.739 NVMe-MI: Not Supported 00:09:14.739 Virtualization Management: Not Supported 00:09:14.739 Doorbell Buffer Config: Supported 00:09:14.739 Get LBA Status Capability: Not Supported 00:09:14.739 Command & Feature Lockdown Capability: Not Supported 00:09:14.739 Abort Command Limit: 4 00:09:14.739 Async Event Request Limit: 4 00:09:14.739 Number of Firmware Slots: N/A 00:09:14.739 Firmware Slot 1 Read-Only: N/A 00:09:14.739 Firmware Activation Without Reset: N/A 00:09:14.739 Multiple Update Detection Support: N/A 00:09:14.739 Firmware Update Granularity: No Information Provided 00:09:14.739 Per-Namespace SMART Log: Yes 00:09:14.739 Asymmetric Namespace Access Log Page: Not Supported 00:09:14.739 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:14.739 Command Effects Log Page: Supported 00:09:14.739 Get Log Page Extended Data: Supported 00:09:14.739 Telemetry Log Pages: Not Supported 00:09:14.739 Persistent Event Log Pages: Not Supported 00:09:14.739 Supported Log Pages Log Page: May Support 00:09:14.739 Commands Supported & Effects Log Page: Not Supported 00:09:14.739 Feature Identifiers & Effects Log Page:May Support 00:09:14.739 NVMe-MI Commands & Effects Log Page: May Support 00:09:14.739 Data Area 4 for Telemetry Log: Not Supported 00:09:14.739 Error Log Page Entries Supported: 1 00:09:14.739 Keep Alive: Not Supported 00:09:14.739 00:09:14.739 NVM Command Set Attributes 00:09:14.739 ========================== 00:09:14.739 Submission Queue Entry Size 
00:09:14.739 Max: 64 00:09:14.739 Min: 64 00:09:14.739 Completion Queue Entry Size 00:09:14.739 Max: 16 00:09:14.739 Min: 16 00:09:14.739 Number of Namespaces: 256 00:09:14.739 Compare Command: Supported 00:09:14.739 Write Uncorrectable Command: Not Supported 00:09:14.739 Dataset Management Command: Supported 00:09:14.739 Write Zeroes Command: Supported 00:09:14.739 Set Features Save Field: Supported 00:09:14.739 Reservations: Not Supported 00:09:14.739 Timestamp: Supported 00:09:14.739 Copy: Supported 00:09:14.739 Volatile Write Cache: Present 00:09:14.739 Atomic Write Unit (Normal): 1 00:09:14.739 Atomic Write Unit (PFail): 1 00:09:14.739 Atomic Compare & Write Unit: 1 00:09:14.739 Fused Compare & Write: Not Supported 00:09:14.739 Scatter-Gather List 00:09:14.739 SGL Command Set: Supported 00:09:14.739 SGL Keyed: Not Supported 00:09:14.739 SGL Bit Bucket Descriptor: Not Supported 00:09:14.739 SGL Metadata Pointer: Not Supported 00:09:14.739 Oversized SGL: Not Supported 00:09:14.739 SGL Metadata Address: Not Supported 00:09:14.739 SGL Offset: Not Supported 00:09:14.739 Transport SGL Data Block: Not Supported 00:09:14.739 Replay Protected Memory Block: Not Supported 00:09:14.739 00:09:14.739 Firmware Slot Information 00:09:14.739 ========================= 00:09:14.739 Active slot: 1 00:09:14.739 Slot 1 Firmware Revision: 1.0 00:09:14.739 00:09:14.739 00:09:14.739 Commands Supported and Effects 00:09:14.739 ============================== 00:09:14.739 Admin Commands 00:09:14.739 -------------- 00:09:14.739 Delete I/O Submission Queue (00h): Supported 00:09:14.739 Create I/O Submission Queue (01h): Supported 00:09:14.739 Get Log Page (02h): Supported 00:09:14.739 Delete I/O Completion Queue (04h): Supported 00:09:14.739 Create I/O Completion Queue (05h): Supported 00:09:14.739 Identify (06h): Supported 00:09:14.739 Abort (08h): Supported 00:09:14.739 Set Features (09h): Supported 00:09:14.739 Get Features (0Ah): Supported 00:09:14.739 Asynchronous Event Request (0Ch): Supported 00:09:14.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:14.739 Directive Send (19h): Supported 00:09:14.739 Directive Receive (1Ah): Supported 00:09:14.739 Virtualization Management (1Ch): Supported 00:09:14.739 Doorbell Buffer Config (7Ch): Supported 00:09:14.739 Format NVM (80h): Supported LBA-Change 00:09:14.739 I/O Commands 00:09:14.739 ------------ 00:09:14.739 Flush (00h): Supported LBA-Change 00:09:14.739 Write (01h): Supported LBA-Change 00:09:14.739 Read (02h): Supported 00:09:14.739 Compare (05h): Supported 00:09:14.739 Write Zeroes (08h): Supported LBA-Change 00:09:14.739 Dataset Management (09h): Supported LBA-Change 00:09:14.739 Unknown (0Ch): Supported 00:09:14.739 Unknown (12h): Supported 00:09:14.739 Copy (19h): Supported LBA-Change 00:09:14.739 Unknown (1Dh): Supported LBA-Change 00:09:14.739 00:09:14.739 Error Log 00:09:14.739 ========= 00:09:14.739 00:09:14.739 Arbitration 00:09:14.739 =========== 00:09:14.739 Arbitration Burst: no limit 00:09:14.739 00:09:14.739 Power Management 00:09:14.739 ================ 00:09:14.739 Number of Power States: 1 00:09:14.739 Current Power State: Power State #0 00:09:14.739 Power State #0: 00:09:14.739 Max Power: 25.00 W 00:09:14.739 Non-Operational State: Operational 00:09:14.739 Entry Latency: 16 microseconds 00:09:14.739 Exit Latency: 4 microseconds 00:09:14.739 Relative Read Throughput: 0 00:09:14.739 Relative Read Latency: 0 00:09:14.739 Relative Write Throughput: 0 00:09:14.739 Relative Write Latency: 0 00:09:14.739 Idle Power: Not 
Reported 00:09:14.739 Active Power: Not Reported 00:09:14.739 Non-Operational Permissive Mode: Not Supported 00:09:14.739 00:09:14.739 Health Information 00:09:14.739 ================== 00:09:14.739 Critical Warnings: 00:09:14.739 Available Spare Space: OK 00:09:14.739 Temperature: OK 00:09:14.739 Device Reliability: OK 00:09:14.739 Read Only: No 00:09:14.739 Volatile Memory Backup: OK 00:09:14.739 Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:14.739 Available Spare: 0% 00:09:14.739 Available Spare Threshold: 0% 00:09:14.739 Life Percentage Used: 0% 00:09:14.739 Data Units Read: 3843 00:09:14.739 Data Units Written: 1775 00:09:14.739 Host Read Commands: 184216 00:09:14.739 Host Write Commands: 90432 00:09:14.739 Controller Busy Time: 0 minutes 00:09:14.739 Power Cycles: 0 00:09:14.739 Power On Hours: 0 hours 00:09:14.739 Unsafe Shutdowns: 0 00:09:14.739 Unrecoverable Media Errors: 0 00:09:14.739 Lifetime Error Log Entries: 0 00:09:14.739 Warning Temperature Time: 0 minutes 00:09:14.739 Critical Temperature Time: 0 minutes 00:09:14.739 00:09:14.739 Number of Queues 00:09:14.739 ================ 00:09:14.739 Number of I/O Submission Queues: 64 00:09:14.739 Number of I/O Completion Queues: 64 00:09:14.739 00:09:14.739 ZNS Specific Controller Data 00:09:14.739 ============================ 00:09:14.739 Zone Append Size Limit: 0 00:09:14.739 00:09:14.739 00:09:14.739 Active Namespaces 00:09:14.739 ================= 00:09:14.739 Namespace ID:1 00:09:14.739 Error Recovery Timeout: Unlimited 00:09:14.739 Command Set Identifier: NVM (00h) 00:09:14.739 Deallocate: Supported 00:09:14.739 Deallocated/Unwritten Error: Supported 00:09:14.739 Deallocated Read Value: All 0x00 00:09:14.739 Deallocate in Write Zeroes: Not Supported 00:09:14.739 Deallocated Guard Field: 0xFFFF 00:09:14.739 Flush: Supported 00:09:14.739 Reservation: Not Supported 00:09:14.739 Namespace Sharing Capabilities: Private 00:09:14.739 Size (in LBAs): 1048576 (4GiB) 00:09:14.739 Capacity (in LBAs): 1048576 (4GiB) 00:09:14.739 Utilization (in LBAs): 1048576 (4GiB) 00:09:14.739 Thin Provisioning: Not Supported 00:09:14.739 Per-NS Atomic Units: No 00:09:15.002 Maximum Single Source Range Length: 128 00:09:15.002 Maximum Copy Length: 128 00:09:15.002 Maximum Source Range Count: 128 00:09:15.002 NGUID/EUI64 Never Reused: No 00:09:15.002 Namespace Write Protected: No 00:09:15.002 Number of LBA Formats: 8 00:09:15.002 Current LBA Format: LBA Format #04 00:09:15.002 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.002 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.002 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.003 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.003 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.003 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.003 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.003 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.003 00:09:15.003 Namespace ID:2 00:09:15.003 Error Recovery Timeout: Unlimited 00:09:15.003 Command Set Identifier: NVM (00h) 00:09:15.003 Deallocate: Supported 00:09:15.003 Deallocated/Unwritten Error: Supported 00:09:15.003 Deallocated Read Value: All 0x00 00:09:15.003 Deallocate in Write Zeroes: Not Supported 00:09:15.003 Deallocated Guard Field: 0xFFFF 00:09:15.003 Flush: Supported 00:09:15.003 Reservation: Not Supported 00:09:15.003 Namespace Sharing Capabilities: Private 00:09:15.003 Size (in LBAs): 1048576 (4GiB) 00:09:15.003 
Capacity (in LBAs): 1048576 (4GiB) 00:09:15.003 Utilization (in LBAs): 1048576 (4GiB) 00:09:15.003 Thin Provisioning: Not Supported 00:09:15.003 Per-NS Atomic Units: No 00:09:15.003 Maximum Single Source Range Length: 128 00:09:15.003 Maximum Copy Length: 128 00:09:15.003 Maximum Source Range Count: 128 00:09:15.003 NGUID/EUI64 Never Reused: No 00:09:15.003 Namespace Write Protected: No 00:09:15.003 Number of LBA Formats: 8 00:09:15.003 Current LBA Format: LBA Format #04 00:09:15.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.003 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.003 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.003 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.003 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.003 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.003 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.003 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.003 00:09:15.003 Namespace ID:3 00:09:15.003 Error Recovery Timeout: Unlimited 00:09:15.003 Command Set Identifier: NVM (00h) 00:09:15.003 Deallocate: Supported 00:09:15.003 Deallocated/Unwritten Error: Supported 00:09:15.003 Deallocated Read Value: All 0x00 00:09:15.003 Deallocate in Write Zeroes: Not Supported 00:09:15.003 Deallocated Guard Field: 0xFFFF 00:09:15.003 Flush: Supported 00:09:15.003 Reservation: Not Supported 00:09:15.003 Namespace Sharing Capabilities: Private 00:09:15.003 Size (in LBAs): 1048576 (4GiB) 00:09:15.003 Capacity (in LBAs): 1048576 (4GiB) 00:09:15.003 Utilization (in LBAs): 1048576 (4GiB) 00:09:15.003 Thin Provisioning: Not Supported 00:09:15.003 Per-NS Atomic Units: No 00:09:15.003 Maximum Single Source Range Length: 128 00:09:15.003 Maximum Copy Length: 128 00:09:15.003 Maximum Source Range Count: 128 00:09:15.003 NGUID/EUI64 Never Reused: No 00:09:15.003 Namespace Write Protected: No 00:09:15.003 Number of LBA Formats: 8 00:09:15.003 Current LBA Format: LBA Format #04 00:09:15.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.003 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.003 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.003 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.003 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.003 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.003 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.003 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.003 00:09:15.003 00:03:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:15.003 00:03:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:15.003 ===================================================== 00:09:15.003 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:15.003 ===================================================== 00:09:15.003 Controller Capabilities/Features 00:09:15.003 ================================ 00:09:15.003 Vendor ID: 1b36 00:09:15.003 Subsystem Vendor ID: 1af4 00:09:15.003 Serial Number: 12340 00:09:15.003 Model Number: QEMU NVMe Ctrl 00:09:15.003 Firmware Version: 8.0.0 00:09:15.003 Recommended Arb Burst: 6 00:09:15.003 IEEE OUI Identifier: 00 54 52 00:09:15.003 Multi-path I/O 00:09:15.003 May have multiple subsystem ports: No 00:09:15.003 May have multiple controllers: No 00:09:15.003 Associated with SR-IOV VF: No 00:09:15.003 Max Data Transfer Size: 524288 00:09:15.003 Max Number of Namespaces: 256 00:09:15.003 Max Number of I/O 
Queues: 64 00:09:15.003 NVMe Specification Version (VS): 1.4 00:09:15.003 NVMe Specification Version (Identify): 1.4 00:09:15.003 Maximum Queue Entries: 2048 00:09:15.003 Contiguous Queues Required: Yes 00:09:15.003 Arbitration Mechanisms Supported 00:09:15.003 Weighted Round Robin: Not Supported 00:09:15.003 Vendor Specific: Not Supported 00:09:15.003 Reset Timeout: 7500 ms 00:09:15.003 Doorbell Stride: 4 bytes 00:09:15.003 NVM Subsystem Reset: Not Supported 00:09:15.003 Command Sets Supported 00:09:15.003 NVM Command Set: Supported 00:09:15.003 Boot Partition: Not Supported 00:09:15.003 Memory Page Size Minimum: 4096 bytes 00:09:15.003 Memory Page Size Maximum: 65536 bytes 00:09:15.003 Persistent Memory Region: Not Supported 00:09:15.003 Optional Asynchronous Events Supported 00:09:15.003 Namespace Attribute Notices: Supported 00:09:15.003 Firmware Activation Notices: Not Supported 00:09:15.003 ANA Change Notices: Not Supported 00:09:15.003 PLE Aggregate Log Change Notices: Not Supported 00:09:15.003 LBA Status Info Alert Notices: Not Supported 00:09:15.003 EGE Aggregate Log Change Notices: Not Supported 00:09:15.003 Normal NVM Subsystem Shutdown event: Not Supported 00:09:15.003 Zone Descriptor Change Notices: Not Supported 00:09:15.003 Discovery Log Change Notices: Not Supported 00:09:15.003 Controller Attributes 00:09:15.003 128-bit Host Identifier: Not Supported 00:09:15.003 Non-Operational Permissive Mode: Not Supported 00:09:15.003 NVM Sets: Not Supported 00:09:15.003 Read Recovery Levels: Not Supported 00:09:15.003 Endurance Groups: Not Supported 00:09:15.003 Predictable Latency Mode: Not Supported 00:09:15.003 Traffic Based Keep ALive: Not Supported 00:09:15.003 Namespace Granularity: Not Supported 00:09:15.003 SQ Associations: Not Supported 00:09:15.003 UUID List: Not Supported 00:09:15.003 Multi-Domain Subsystem: Not Supported 00:09:15.003 Fixed Capacity Management: Not Supported 00:09:15.003 Variable Capacity Management: Not Supported 00:09:15.003 Delete Endurance Group: Not Supported 00:09:15.003 Delete NVM Set: Not Supported 00:09:15.003 Extended LBA Formats Supported: Supported 00:09:15.003 Flexible Data Placement Supported: Not Supported 00:09:15.003 00:09:15.003 Controller Memory Buffer Support 00:09:15.003 ================================ 00:09:15.003 Supported: No 00:09:15.003 00:09:15.003 Persistent Memory Region Support 00:09:15.003 ================================ 00:09:15.003 Supported: No 00:09:15.003 00:09:15.003 Admin Command Set Attributes 00:09:15.003 ============================ 00:09:15.003 Security Send/Receive: Not Supported 00:09:15.003 Format NVM: Supported 00:09:15.003 Firmware Activate/Download: Not Supported 00:09:15.003 Namespace Management: Supported 00:09:15.003 Device Self-Test: Not Supported 00:09:15.003 Directives: Supported 00:09:15.003 NVMe-MI: Not Supported 00:09:15.003 Virtualization Management: Not Supported 00:09:15.003 Doorbell Buffer Config: Supported 00:09:15.003 Get LBA Status Capability: Not Supported 00:09:15.003 Command & Feature Lockdown Capability: Not Supported 00:09:15.003 Abort Command Limit: 4 00:09:15.003 Async Event Request Limit: 4 00:09:15.003 Number of Firmware Slots: N/A 00:09:15.003 Firmware Slot 1 Read-Only: N/A 00:09:15.003 Firmware Activation Without Reset: N/A 00:09:15.003 Multiple Update Detection Support: N/A 00:09:15.003 Firmware Update Granularity: No Information Provided 00:09:15.003 Per-Namespace SMART Log: Yes 00:09:15.003 Asymmetric Namespace Access Log Page: Not Supported 00:09:15.003 Subsystem NQN: 
nqn.2019-08.org.qemu:12340 00:09:15.003 Command Effects Log Page: Supported 00:09:15.003 Get Log Page Extended Data: Supported 00:09:15.003 Telemetry Log Pages: Not Supported 00:09:15.003 Persistent Event Log Pages: Not Supported 00:09:15.003 Supported Log Pages Log Page: May Support 00:09:15.003 Commands Supported & Effects Log Page: Not Supported 00:09:15.003 Feature Identifiers & Effects Log Page:May Support 00:09:15.003 NVMe-MI Commands & Effects Log Page: May Support 00:09:15.003 Data Area 4 for Telemetry Log: Not Supported 00:09:15.003 Error Log Page Entries Supported: 1 00:09:15.003 Keep Alive: Not Supported 00:09:15.003 00:09:15.003 NVM Command Set Attributes 00:09:15.003 ========================== 00:09:15.003 Submission Queue Entry Size 00:09:15.003 Max: 64 00:09:15.003 Min: 64 00:09:15.003 Completion Queue Entry Size 00:09:15.003 Max: 16 00:09:15.003 Min: 16 00:09:15.004 Number of Namespaces: 256 00:09:15.004 Compare Command: Supported 00:09:15.004 Write Uncorrectable Command: Not Supported 00:09:15.004 Dataset Management Command: Supported 00:09:15.004 Write Zeroes Command: Supported 00:09:15.004 Set Features Save Field: Supported 00:09:15.004 Reservations: Not Supported 00:09:15.004 Timestamp: Supported 00:09:15.004 Copy: Supported 00:09:15.004 Volatile Write Cache: Present 00:09:15.004 Atomic Write Unit (Normal): 1 00:09:15.004 Atomic Write Unit (PFail): 1 00:09:15.004 Atomic Compare & Write Unit: 1 00:09:15.004 Fused Compare & Write: Not Supported 00:09:15.004 Scatter-Gather List 00:09:15.004 SGL Command Set: Supported 00:09:15.004 SGL Keyed: Not Supported 00:09:15.004 SGL Bit Bucket Descriptor: Not Supported 00:09:15.004 SGL Metadata Pointer: Not Supported 00:09:15.004 Oversized SGL: Not Supported 00:09:15.004 SGL Metadata Address: Not Supported 00:09:15.004 SGL Offset: Not Supported 00:09:15.004 Transport SGL Data Block: Not Supported 00:09:15.004 Replay Protected Memory Block: Not Supported 00:09:15.004 00:09:15.004 Firmware Slot Information 00:09:15.004 ========================= 00:09:15.004 Active slot: 1 00:09:15.004 Slot 1 Firmware Revision: 1.0 00:09:15.004 00:09:15.004 00:09:15.004 Commands Supported and Effects 00:09:15.004 ============================== 00:09:15.004 Admin Commands 00:09:15.004 -------------- 00:09:15.004 Delete I/O Submission Queue (00h): Supported 00:09:15.004 Create I/O Submission Queue (01h): Supported 00:09:15.004 Get Log Page (02h): Supported 00:09:15.004 Delete I/O Completion Queue (04h): Supported 00:09:15.004 Create I/O Completion Queue (05h): Supported 00:09:15.004 Identify (06h): Supported 00:09:15.004 Abort (08h): Supported 00:09:15.004 Set Features (09h): Supported 00:09:15.004 Get Features (0Ah): Supported 00:09:15.004 Asynchronous Event Request (0Ch): Supported 00:09:15.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:15.004 Directive Send (19h): Supported 00:09:15.004 Directive Receive (1Ah): Supported 00:09:15.004 Virtualization Management (1Ch): Supported 00:09:15.004 Doorbell Buffer Config (7Ch): Supported 00:09:15.004 Format NVM (80h): Supported LBA-Change 00:09:15.004 I/O Commands 00:09:15.004 ------------ 00:09:15.004 Flush (00h): Supported LBA-Change 00:09:15.004 Write (01h): Supported LBA-Change 00:09:15.004 Read (02h): Supported 00:09:15.004 Compare (05h): Supported 00:09:15.004 Write Zeroes (08h): Supported LBA-Change 00:09:15.004 Dataset Management (09h): Supported LBA-Change 00:09:15.004 Unknown (0Ch): Supported 00:09:15.004 Unknown (12h): Supported 00:09:15.004 Copy (19h): Supported LBA-Change 
00:09:15.004 Unknown (1Dh): Supported LBA-Change 00:09:15.004 00:09:15.004 Error Log 00:09:15.004 ========= 00:09:15.004 00:09:15.004 Arbitration 00:09:15.004 =========== 00:09:15.004 Arbitration Burst: no limit 00:09:15.004 00:09:15.004 Power Management 00:09:15.004 ================ 00:09:15.004 Number of Power States: 1 00:09:15.004 Current Power State: Power State #0 00:09:15.004 Power State #0: 00:09:15.004 Max Power: 25.00 W 00:09:15.004 Non-Operational State: Operational 00:09:15.004 Entry Latency: 16 microseconds 00:09:15.004 Exit Latency: 4 microseconds 00:09:15.004 Relative Read Throughput: 0 00:09:15.004 Relative Read Latency: 0 00:09:15.004 Relative Write Throughput: 0 00:09:15.004 Relative Write Latency: 0 00:09:15.004 Idle Power: Not Reported 00:09:15.004 Active Power: Not Reported 00:09:15.004 Non-Operational Permissive Mode: Not Supported 00:09:15.004 00:09:15.004 Health Information 00:09:15.004 ================== 00:09:15.004 Critical Warnings: 00:09:15.004 Available Spare Space: OK 00:09:15.004 Temperature: OK 00:09:15.004 Device Reliability: OK 00:09:15.004 Read Only: No 00:09:15.004 Volatile Memory Backup: OK 00:09:15.004 Current Temperature: 323 Kelvin (50 Celsius) 00:09:15.004 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:15.004 Available Spare: 0% 00:09:15.004 Available Spare Threshold: 0% 00:09:15.004 Life Percentage Used: 0% 00:09:15.004 Data Units Read: 1809 00:09:15.004 Data Units Written: 835 00:09:15.004 Host Read Commands: 88590 00:09:15.004 Host Write Commands: 43979 00:09:15.004 Controller Busy Time: 0 minutes 00:09:15.004 Power Cycles: 0 00:09:15.004 Power On Hours: 0 hours 00:09:15.004 Unsafe Shutdowns: 0 00:09:15.004 Unrecoverable Media Errors: 0 00:09:15.004 Lifetime Error Log Entries: 0 00:09:15.004 Warning Temperature Time: 0 minutes 00:09:15.004 Critical Temperature Time: 0 minutes 00:09:15.004 00:09:15.004 Number of Queues 00:09:15.004 ================ 00:09:15.004 Number of I/O Submission Queues: 64 00:09:15.004 Number of I/O Completion Queues: 64 00:09:15.004 00:09:15.004 ZNS Specific Controller Data 00:09:15.004 ============================ 00:09:15.004 Zone Append Size Limit: 0 00:09:15.004 00:09:15.004 00:09:15.004 Active Namespaces 00:09:15.004 ================= 00:09:15.004 Namespace ID:1 00:09:15.004 Error Recovery Timeout: Unlimited 00:09:15.004 Command Set Identifier: NVM (00h) 00:09:15.004 Deallocate: Supported 00:09:15.004 Deallocated/Unwritten Error: Supported 00:09:15.004 Deallocated Read Value: All 0x00 00:09:15.004 Deallocate in Write Zeroes: Not Supported 00:09:15.004 Deallocated Guard Field: 0xFFFF 00:09:15.004 Flush: Supported 00:09:15.004 Reservation: Not Supported 00:09:15.004 Metadata Transferred as: Separate Metadata Buffer 00:09:15.004 Namespace Sharing Capabilities: Private 00:09:15.004 Size (in LBAs): 1548666 (5GiB) 00:09:15.004 Capacity (in LBAs): 1548666 (5GiB) 00:09:15.004 Utilization (in LBAs): 1548666 (5GiB) 00:09:15.004 Thin Provisioning: Not Supported 00:09:15.004 Per-NS Atomic Units: No 00:09:15.004 Maximum Single Source Range Length: 128 00:09:15.004 Maximum Copy Length: 128 00:09:15.004 Maximum Source Range Count: 128 00:09:15.004 NGUID/EUI64 Never Reused: No 00:09:15.004 Namespace Write Protected: No 00:09:15.004 Number of LBA Formats: 8 00:09:15.004 Current LBA Format: LBA Format #07 00:09:15.004 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.004 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.004 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.004 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:15.004 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.004 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.004 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.004 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.004 00:09:15.004 00:03:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:15.004 00:03:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:15.265 ===================================================== 00:09:15.266 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:15.266 ===================================================== 00:09:15.266 Controller Capabilities/Features 00:09:15.266 ================================ 00:09:15.266 Vendor ID: 1b36 00:09:15.266 Subsystem Vendor ID: 1af4 00:09:15.266 Serial Number: 12341 00:09:15.266 Model Number: QEMU NVMe Ctrl 00:09:15.266 Firmware Version: 8.0.0 00:09:15.266 Recommended Arb Burst: 6 00:09:15.266 IEEE OUI Identifier: 00 54 52 00:09:15.266 Multi-path I/O 00:09:15.266 May have multiple subsystem ports: No 00:09:15.266 May have multiple controllers: No 00:09:15.266 Associated with SR-IOV VF: No 00:09:15.266 Max Data Transfer Size: 524288 00:09:15.266 Max Number of Namespaces: 256 00:09:15.266 Max Number of I/O Queues: 64 00:09:15.266 NVMe Specification Version (VS): 1.4 00:09:15.266 NVMe Specification Version (Identify): 1.4 00:09:15.266 Maximum Queue Entries: 2048 00:09:15.266 Contiguous Queues Required: Yes 00:09:15.266 Arbitration Mechanisms Supported 00:09:15.266 Weighted Round Robin: Not Supported 00:09:15.266 Vendor Specific: Not Supported 00:09:15.266 Reset Timeout: 7500 ms 00:09:15.266 Doorbell Stride: 4 bytes 00:09:15.266 NVM Subsystem Reset: Not Supported 00:09:15.266 Command Sets Supported 00:09:15.266 NVM Command Set: Supported 00:09:15.266 Boot Partition: Not Supported 00:09:15.266 Memory Page Size Minimum: 4096 bytes 00:09:15.266 Memory Page Size Maximum: 65536 bytes 00:09:15.266 Persistent Memory Region: Not Supported 00:09:15.266 Optional Asynchronous Events Supported 00:09:15.266 Namespace Attribute Notices: Supported 00:09:15.266 Firmware Activation Notices: Not Supported 00:09:15.266 ANA Change Notices: Not Supported 00:09:15.266 PLE Aggregate Log Change Notices: Not Supported 00:09:15.266 LBA Status Info Alert Notices: Not Supported 00:09:15.266 EGE Aggregate Log Change Notices: Not Supported 00:09:15.266 Normal NVM Subsystem Shutdown event: Not Supported 00:09:15.266 Zone Descriptor Change Notices: Not Supported 00:09:15.266 Discovery Log Change Notices: Not Supported 00:09:15.266 Controller Attributes 00:09:15.266 128-bit Host Identifier: Not Supported 00:09:15.266 Non-Operational Permissive Mode: Not Supported 00:09:15.266 NVM Sets: Not Supported 00:09:15.266 Read Recovery Levels: Not Supported 00:09:15.266 Endurance Groups: Not Supported 00:09:15.266 Predictable Latency Mode: Not Supported 00:09:15.266 Traffic Based Keep ALive: Not Supported 00:09:15.266 Namespace Granularity: Not Supported 00:09:15.266 SQ Associations: Not Supported 00:09:15.266 UUID List: Not Supported 00:09:15.266 Multi-Domain Subsystem: Not Supported 00:09:15.266 Fixed Capacity Management: Not Supported 00:09:15.266 Variable Capacity Management: Not Supported 00:09:15.266 Delete Endurance Group: Not Supported 00:09:15.266 Delete NVM Set: Not Supported 00:09:15.266 Extended LBA Formats Supported: Supported 00:09:15.266 Flexible Data Placement Supported: Not Supported 00:09:15.266 00:09:15.266 
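The "Max Data Transfer Size: 524288" reported above follows from the controller's MDTS field, which NVMe defines as a power-of-two multiple of the minimum memory page size (4096 bytes here). A short sketch of that relationship, using the two values from this dump:

    import math

    max_transfer = 524288      # "Max Data Transfer Size" (bytes)
    min_page_size = 4096       # "Memory Page Size Minimum" (bytes)

    # NVMe encodes MDTS as log2(max transfer / minimum page size).
    mdts = int(math.log2(max_transfer // min_page_size))
    print(mdts)                # 7
    assert min_page_size * (1 << mdts) == max_transfer
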
Controller Memory Buffer Support 00:09:15.266 ================================ 00:09:15.266 Supported: No 00:09:15.266 00:09:15.266 Persistent Memory Region Support 00:09:15.266 ================================ 00:09:15.266 Supported: No 00:09:15.266 00:09:15.266 Admin Command Set Attributes 00:09:15.266 ============================ 00:09:15.266 Security Send/Receive: Not Supported 00:09:15.266 Format NVM: Supported 00:09:15.266 Firmware Activate/Download: Not Supported 00:09:15.266 Namespace Management: Supported 00:09:15.266 Device Self-Test: Not Supported 00:09:15.266 Directives: Supported 00:09:15.266 NVMe-MI: Not Supported 00:09:15.266 Virtualization Management: Not Supported 00:09:15.266 Doorbell Buffer Config: Supported 00:09:15.266 Get LBA Status Capability: Not Supported 00:09:15.266 Command & Feature Lockdown Capability: Not Supported 00:09:15.266 Abort Command Limit: 4 00:09:15.266 Async Event Request Limit: 4 00:09:15.266 Number of Firmware Slots: N/A 00:09:15.266 Firmware Slot 1 Read-Only: N/A 00:09:15.266 Firmware Activation Without Reset: N/A 00:09:15.266 Multiple Update Detection Support: N/A 00:09:15.266 Firmware Update Granularity: No Information Provided 00:09:15.266 Per-Namespace SMART Log: Yes 00:09:15.266 Asymmetric Namespace Access Log Page: Not Supported 00:09:15.266 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:15.266 Command Effects Log Page: Supported 00:09:15.266 Get Log Page Extended Data: Supported 00:09:15.266 Telemetry Log Pages: Not Supported 00:09:15.266 Persistent Event Log Pages: Not Supported 00:09:15.266 Supported Log Pages Log Page: May Support 00:09:15.266 Commands Supported & Effects Log Page: Not Supported 00:09:15.266 Feature Identifiers & Effects Log Page:May Support 00:09:15.266 NVMe-MI Commands & Effects Log Page: May Support 00:09:15.266 Data Area 4 for Telemetry Log: Not Supported 00:09:15.266 Error Log Page Entries Supported: 1 00:09:15.266 Keep Alive: Not Supported 00:09:15.266 00:09:15.266 NVM Command Set Attributes 00:09:15.266 ========================== 00:09:15.266 Submission Queue Entry Size 00:09:15.266 Max: 64 00:09:15.266 Min: 64 00:09:15.266 Completion Queue Entry Size 00:09:15.266 Max: 16 00:09:15.266 Min: 16 00:09:15.266 Number of Namespaces: 256 00:09:15.266 Compare Command: Supported 00:09:15.266 Write Uncorrectable Command: Not Supported 00:09:15.266 Dataset Management Command: Supported 00:09:15.266 Write Zeroes Command: Supported 00:09:15.266 Set Features Save Field: Supported 00:09:15.266 Reservations: Not Supported 00:09:15.266 Timestamp: Supported 00:09:15.266 Copy: Supported 00:09:15.266 Volatile Write Cache: Present 00:09:15.266 Atomic Write Unit (Normal): 1 00:09:15.266 Atomic Write Unit (PFail): 1 00:09:15.266 Atomic Compare & Write Unit: 1 00:09:15.266 Fused Compare & Write: Not Supported 00:09:15.266 Scatter-Gather List 00:09:15.266 SGL Command Set: Supported 00:09:15.266 SGL Keyed: Not Supported 00:09:15.266 SGL Bit Bucket Descriptor: Not Supported 00:09:15.266 SGL Metadata Pointer: Not Supported 00:09:15.266 Oversized SGL: Not Supported 00:09:15.266 SGL Metadata Address: Not Supported 00:09:15.266 SGL Offset: Not Supported 00:09:15.266 Transport SGL Data Block: Not Supported 00:09:15.266 Replay Protected Memory Block: Not Supported 00:09:15.266 00:09:15.266 Firmware Slot Information 00:09:15.266 ========================= 00:09:15.266 Active slot: 1 00:09:15.266 Slot 1 Firmware Revision: 1.0 00:09:15.266 00:09:15.266 00:09:15.266 Commands Supported and Effects 00:09:15.266 ============================== 
00:09:15.266 Admin Commands 00:09:15.266 -------------- 00:09:15.266 Delete I/O Submission Queue (00h): Supported 00:09:15.266 Create I/O Submission Queue (01h): Supported 00:09:15.266 Get Log Page (02h): Supported 00:09:15.266 Delete I/O Completion Queue (04h): Supported 00:09:15.266 Create I/O Completion Queue (05h): Supported 00:09:15.266 Identify (06h): Supported 00:09:15.266 Abort (08h): Supported 00:09:15.266 Set Features (09h): Supported 00:09:15.266 Get Features (0Ah): Supported 00:09:15.266 Asynchronous Event Request (0Ch): Supported 00:09:15.266 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:15.266 Directive Send (19h): Supported 00:09:15.266 Directive Receive (1Ah): Supported 00:09:15.266 Virtualization Management (1Ch): Supported 00:09:15.266 Doorbell Buffer Config (7Ch): Supported 00:09:15.266 Format NVM (80h): Supported LBA-Change 00:09:15.266 I/O Commands 00:09:15.266 ------------ 00:09:15.266 Flush (00h): Supported LBA-Change 00:09:15.266 Write (01h): Supported LBA-Change 00:09:15.266 Read (02h): Supported 00:09:15.266 Compare (05h): Supported 00:09:15.266 Write Zeroes (08h): Supported LBA-Change 00:09:15.266 Dataset Management (09h): Supported LBA-Change 00:09:15.266 Unknown (0Ch): Supported 00:09:15.266 Unknown (12h): Supported 00:09:15.266 Copy (19h): Supported LBA-Change 00:09:15.266 Unknown (1Dh): Supported LBA-Change 00:09:15.266 00:09:15.266 Error Log 00:09:15.266 ========= 00:09:15.266 00:09:15.266 Arbitration 00:09:15.266 =========== 00:09:15.266 Arbitration Burst: no limit 00:09:15.266 00:09:15.266 Power Management 00:09:15.266 ================ 00:09:15.266 Number of Power States: 1 00:09:15.266 Current Power State: Power State #0 00:09:15.266 Power State #0: 00:09:15.266 Max Power: 25.00 W 00:09:15.266 Non-Operational State: Operational 00:09:15.266 Entry Latency: 16 microseconds 00:09:15.266 Exit Latency: 4 microseconds 00:09:15.266 Relative Read Throughput: 0 00:09:15.266 Relative Read Latency: 0 00:09:15.266 Relative Write Throughput: 0 00:09:15.266 Relative Write Latency: 0 00:09:15.266 Idle Power: Not Reported 00:09:15.266 Active Power: Not Reported 00:09:15.266 Non-Operational Permissive Mode: Not Supported 00:09:15.266 00:09:15.266 Health Information 00:09:15.266 ================== 00:09:15.266 Critical Warnings: 00:09:15.266 Available Spare Space: OK 00:09:15.266 Temperature: OK 00:09:15.266 Device Reliability: OK 00:09:15.266 Read Only: No 00:09:15.266 Volatile Memory Backup: OK 00:09:15.266 Current Temperature: 323 Kelvin (50 Celsius) 00:09:15.266 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:15.266 Available Spare: 0% 00:09:15.266 Available Spare Threshold: 0% 00:09:15.266 Life Percentage Used: 0% 00:09:15.266 Data Units Read: 1220 00:09:15.266 Data Units Written: 567 00:09:15.266 Host Read Commands: 60723 00:09:15.266 Host Write Commands: 29884 00:09:15.266 Controller Busy Time: 0 minutes 00:09:15.266 Power Cycles: 0 00:09:15.266 Power On Hours: 0 hours 00:09:15.266 Unsafe Shutdowns: 0 00:09:15.266 Unrecoverable Media Errors: 0 00:09:15.266 Lifetime Error Log Entries: 0 00:09:15.266 Warning Temperature Time: 0 minutes 00:09:15.266 Critical Temperature Time: 0 minutes 00:09:15.266 00:09:15.266 Number of Queues 00:09:15.266 ================ 00:09:15.266 Number of I/O Submission Queues: 64 00:09:15.266 Number of I/O Completion Queues: 64 00:09:15.266 00:09:15.266 ZNS Specific Controller Data 00:09:15.266 ============================ 00:09:15.266 Zone Append Size Limit: 0 00:09:15.266 00:09:15.266 00:09:15.266 Active Namespaces 
00:09:15.266 ================= 00:09:15.266 Namespace ID:1 00:09:15.266 Error Recovery Timeout: Unlimited 00:09:15.266 Command Set Identifier: NVM (00h) 00:09:15.266 Deallocate: Supported 00:09:15.266 Deallocated/Unwritten Error: Supported 00:09:15.266 Deallocated Read Value: All 0x00 00:09:15.267 Deallocate in Write Zeroes: Not Supported 00:09:15.267 Deallocated Guard Field: 0xFFFF 00:09:15.267 Flush: Supported 00:09:15.267 Reservation: Not Supported 00:09:15.267 Namespace Sharing Capabilities: Private 00:09:15.267 Size (in LBAs): 1310720 (5GiB) 00:09:15.267 Capacity (in LBAs): 1310720 (5GiB) 00:09:15.267 Utilization (in LBAs): 1310720 (5GiB) 00:09:15.267 Thin Provisioning: Not Supported 00:09:15.267 Per-NS Atomic Units: No 00:09:15.267 Maximum Single Source Range Length: 128 00:09:15.267 Maximum Copy Length: 128 00:09:15.267 Maximum Source Range Count: 128 00:09:15.267 NGUID/EUI64 Never Reused: No 00:09:15.267 Namespace Write Protected: No 00:09:15.267 Number of LBA Formats: 8 00:09:15.267 Current LBA Format: LBA Format #04 00:09:15.267 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.267 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.267 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.267 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.267 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.267 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.267 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.267 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.267 00:09:15.267 00:03:29 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:15.267 00:03:29 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:15.529 ===================================================== 00:09:15.529 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:15.529 ===================================================== 00:09:15.529 Controller Capabilities/Features 00:09:15.529 ================================ 00:09:15.529 Vendor ID: 1b36 00:09:15.529 Subsystem Vendor ID: 1af4 00:09:15.529 Serial Number: 12342 00:09:15.529 Model Number: QEMU NVMe Ctrl 00:09:15.529 Firmware Version: 8.0.0 00:09:15.529 Recommended Arb Burst: 6 00:09:15.529 IEEE OUI Identifier: 00 54 52 00:09:15.529 Multi-path I/O 00:09:15.529 May have multiple subsystem ports: No 00:09:15.529 May have multiple controllers: No 00:09:15.529 Associated with SR-IOV VF: No 00:09:15.529 Max Data Transfer Size: 524288 00:09:15.529 Max Number of Namespaces: 256 00:09:15.529 Max Number of I/O Queues: 64 00:09:15.529 NVMe Specification Version (VS): 1.4 00:09:15.529 NVMe Specification Version (Identify): 1.4 00:09:15.529 Maximum Queue Entries: 2048 00:09:15.529 Contiguous Queues Required: Yes 00:09:15.529 Arbitration Mechanisms Supported 00:09:15.529 Weighted Round Robin: Not Supported 00:09:15.529 Vendor Specific: Not Supported 00:09:15.529 Reset Timeout: 7500 ms 00:09:15.529 Doorbell Stride: 4 bytes 00:09:15.529 NVM Subsystem Reset: Not Supported 00:09:15.529 Command Sets Supported 00:09:15.529 NVM Command Set: Supported 00:09:15.529 Boot Partition: Not Supported 00:09:15.529 Memory Page Size Minimum: 4096 bytes 00:09:15.529 Memory Page Size Maximum: 65536 bytes 00:09:15.529 Persistent Memory Region: Not Supported 00:09:15.529 Optional Asynchronous Events Supported 00:09:15.529 Namespace Attribute Notices: Supported 00:09:15.529 Firmware Activation Notices: Not Supported 00:09:15.529 ANA Change Notices: Not Supported 
00:09:15.529 PLE Aggregate Log Change Notices: Not Supported 00:09:15.529 LBA Status Info Alert Notices: Not Supported 00:09:15.529 EGE Aggregate Log Change Notices: Not Supported 00:09:15.529 Normal NVM Subsystem Shutdown event: Not Supported 00:09:15.529 Zone Descriptor Change Notices: Not Supported 00:09:15.529 Discovery Log Change Notices: Not Supported 00:09:15.529 Controller Attributes 00:09:15.529 128-bit Host Identifier: Not Supported 00:09:15.529 Non-Operational Permissive Mode: Not Supported 00:09:15.529 NVM Sets: Not Supported 00:09:15.529 Read Recovery Levels: Not Supported 00:09:15.529 Endurance Groups: Not Supported 00:09:15.529 Predictable Latency Mode: Not Supported 00:09:15.530 Traffic Based Keep ALive: Not Supported 00:09:15.530 Namespace Granularity: Not Supported 00:09:15.530 SQ Associations: Not Supported 00:09:15.530 UUID List: Not Supported 00:09:15.530 Multi-Domain Subsystem: Not Supported 00:09:15.530 Fixed Capacity Management: Not Supported 00:09:15.530 Variable Capacity Management: Not Supported 00:09:15.530 Delete Endurance Group: Not Supported 00:09:15.530 Delete NVM Set: Not Supported 00:09:15.530 Extended LBA Formats Supported: Supported 00:09:15.530 Flexible Data Placement Supported: Not Supported 00:09:15.530 00:09:15.530 Controller Memory Buffer Support 00:09:15.530 ================================ 00:09:15.530 Supported: No 00:09:15.530 00:09:15.530 Persistent Memory Region Support 00:09:15.530 ================================ 00:09:15.530 Supported: No 00:09:15.530 00:09:15.530 Admin Command Set Attributes 00:09:15.530 ============================ 00:09:15.530 Security Send/Receive: Not Supported 00:09:15.530 Format NVM: Supported 00:09:15.530 Firmware Activate/Download: Not Supported 00:09:15.530 Namespace Management: Supported 00:09:15.530 Device Self-Test: Not Supported 00:09:15.530 Directives: Supported 00:09:15.530 NVMe-MI: Not Supported 00:09:15.530 Virtualization Management: Not Supported 00:09:15.530 Doorbell Buffer Config: Supported 00:09:15.530 Get LBA Status Capability: Not Supported 00:09:15.530 Command & Feature Lockdown Capability: Not Supported 00:09:15.530 Abort Command Limit: 4 00:09:15.530 Async Event Request Limit: 4 00:09:15.530 Number of Firmware Slots: N/A 00:09:15.530 Firmware Slot 1 Read-Only: N/A 00:09:15.530 Firmware Activation Without Reset: N/A 00:09:15.530 Multiple Update Detection Support: N/A 00:09:15.530 Firmware Update Granularity: No Information Provided 00:09:15.530 Per-Namespace SMART Log: Yes 00:09:15.530 Asymmetric Namespace Access Log Page: Not Supported 00:09:15.530 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:15.530 Command Effects Log Page: Supported 00:09:15.530 Get Log Page Extended Data: Supported 00:09:15.530 Telemetry Log Pages: Not Supported 00:09:15.530 Persistent Event Log Pages: Not Supported 00:09:15.530 Supported Log Pages Log Page: May Support 00:09:15.530 Commands Supported & Effects Log Page: Not Supported 00:09:15.530 Feature Identifiers & Effects Log Page:May Support 00:09:15.530 NVMe-MI Commands & Effects Log Page: May Support 00:09:15.530 Data Area 4 for Telemetry Log: Not Supported 00:09:15.530 Error Log Page Entries Supported: 1 00:09:15.530 Keep Alive: Not Supported 00:09:15.530 00:09:15.530 NVM Command Set Attributes 00:09:15.530 ========================== 00:09:15.530 Submission Queue Entry Size 00:09:15.530 Max: 64 00:09:15.530 Min: 64 00:09:15.530 Completion Queue Entry Size 00:09:15.530 Max: 16 00:09:15.530 Min: 16 00:09:15.530 Number of Namespaces: 256 00:09:15.530 Compare Command: 
Supported 00:09:15.530 Write Uncorrectable Command: Not Supported 00:09:15.530 Dataset Management Command: Supported 00:09:15.530 Write Zeroes Command: Supported 00:09:15.530 Set Features Save Field: Supported 00:09:15.530 Reservations: Not Supported 00:09:15.530 Timestamp: Supported 00:09:15.530 Copy: Supported 00:09:15.530 Volatile Write Cache: Present 00:09:15.530 Atomic Write Unit (Normal): 1 00:09:15.530 Atomic Write Unit (PFail): 1 00:09:15.530 Atomic Compare & Write Unit: 1 00:09:15.530 Fused Compare & Write: Not Supported 00:09:15.530 Scatter-Gather List 00:09:15.530 SGL Command Set: Supported 00:09:15.530 SGL Keyed: Not Supported 00:09:15.530 SGL Bit Bucket Descriptor: Not Supported 00:09:15.530 SGL Metadata Pointer: Not Supported 00:09:15.530 Oversized SGL: Not Supported 00:09:15.530 SGL Metadata Address: Not Supported 00:09:15.530 SGL Offset: Not Supported 00:09:15.530 Transport SGL Data Block: Not Supported 00:09:15.530 Replay Protected Memory Block: Not Supported 00:09:15.530 00:09:15.530 Firmware Slot Information 00:09:15.530 ========================= 00:09:15.530 Active slot: 1 00:09:15.530 Slot 1 Firmware Revision: 1.0 00:09:15.530 00:09:15.530 00:09:15.530 Commands Supported and Effects 00:09:15.530 ============================== 00:09:15.530 Admin Commands 00:09:15.530 -------------- 00:09:15.530 Delete I/O Submission Queue (00h): Supported 00:09:15.530 Create I/O Submission Queue (01h): Supported 00:09:15.530 Get Log Page (02h): Supported 00:09:15.530 Delete I/O Completion Queue (04h): Supported 00:09:15.530 Create I/O Completion Queue (05h): Supported 00:09:15.530 Identify (06h): Supported 00:09:15.530 Abort (08h): Supported 00:09:15.530 Set Features (09h): Supported 00:09:15.530 Get Features (0Ah): Supported 00:09:15.530 Asynchronous Event Request (0Ch): Supported 00:09:15.530 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:15.530 Directive Send (19h): Supported 00:09:15.530 Directive Receive (1Ah): Supported 00:09:15.530 Virtualization Management (1Ch): Supported 00:09:15.530 Doorbell Buffer Config (7Ch): Supported 00:09:15.530 Format NVM (80h): Supported LBA-Change 00:09:15.530 I/O Commands 00:09:15.530 ------------ 00:09:15.530 Flush (00h): Supported LBA-Change 00:09:15.530 Write (01h): Supported LBA-Change 00:09:15.530 Read (02h): Supported 00:09:15.530 Compare (05h): Supported 00:09:15.530 Write Zeroes (08h): Supported LBA-Change 00:09:15.530 Dataset Management (09h): Supported LBA-Change 00:09:15.530 Unknown (0Ch): Supported 00:09:15.530 Unknown (12h): Supported 00:09:15.530 Copy (19h): Supported LBA-Change 00:09:15.530 Unknown (1Dh): Supported LBA-Change 00:09:15.530 00:09:15.530 Error Log 00:09:15.530 ========= 00:09:15.530 00:09:15.530 Arbitration 00:09:15.530 =========== 00:09:15.530 Arbitration Burst: no limit 00:09:15.530 00:09:15.530 Power Management 00:09:15.530 ================ 00:09:15.530 Number of Power States: 1 00:09:15.530 Current Power State: Power State #0 00:09:15.530 Power State #0: 00:09:15.530 Max Power: 25.00 W 00:09:15.530 Non-Operational State: Operational 00:09:15.530 Entry Latency: 16 microseconds 00:09:15.530 Exit Latency: 4 microseconds 00:09:15.530 Relative Read Throughput: 0 00:09:15.530 Relative Read Latency: 0 00:09:15.530 Relative Write Throughput: 0 00:09:15.530 Relative Write Latency: 0 00:09:15.530 Idle Power: Not Reported 00:09:15.530 Active Power: Not Reported 00:09:15.530 Non-Operational Permissive Mode: Not Supported 00:09:15.530 00:09:15.530 Health Information 00:09:15.530 ================== 00:09:15.530 
Critical Warnings: 00:09:15.530 Available Spare Space: OK 00:09:15.530 Temperature: OK 00:09:15.530 Device Reliability: OK 00:09:15.530 Read Only: No 00:09:15.530 Volatile Memory Backup: OK 00:09:15.530 Current Temperature: 323 Kelvin (50 Celsius) 00:09:15.530 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:15.530 Available Spare: 0% 00:09:15.530 Available Spare Threshold: 0% 00:09:15.530 Life Percentage Used: 0% 00:09:15.530 Data Units Read: 3843 00:09:15.530 Data Units Written: 1775 00:09:15.530 Host Read Commands: 184216 00:09:15.530 Host Write Commands: 90432 00:09:15.530 Controller Busy Time: 0 minutes 00:09:15.530 Power Cycles: 0 00:09:15.530 Power On Hours: 0 hours 00:09:15.530 Unsafe Shutdowns: 0 00:09:15.530 Unrecoverable Media Errors: 0 00:09:15.530 Lifetime Error Log Entries: 0 00:09:15.530 Warning Temperature Time: 0 minutes 00:09:15.530 Critical Temperature Time: 0 minutes 00:09:15.530 00:09:15.530 Number of Queues 00:09:15.530 ================ 00:09:15.530 Number of I/O Submission Queues: 64 00:09:15.530 Number of I/O Completion Queues: 64 00:09:15.530 00:09:15.530 ZNS Specific Controller Data 00:09:15.530 ============================ 00:09:15.530 Zone Append Size Limit: 0 00:09:15.530 00:09:15.530 00:09:15.530 Active Namespaces 00:09:15.530 ================= 00:09:15.530 Namespace ID:1 00:09:15.530 Error Recovery Timeout: Unlimited 00:09:15.530 Command Set Identifier: NVM (00h) 00:09:15.530 Deallocate: Supported 00:09:15.530 Deallocated/Unwritten Error: Supported 00:09:15.530 Deallocated Read Value: All 0x00 00:09:15.530 Deallocate in Write Zeroes: Not Supported 00:09:15.530 Deallocated Guard Field: 0xFFFF 00:09:15.530 Flush: Supported 00:09:15.530 Reservation: Not Supported 00:09:15.530 Namespace Sharing Capabilities: Private 00:09:15.530 Size (in LBAs): 1048576 (4GiB) 00:09:15.530 Capacity (in LBAs): 1048576 (4GiB) 00:09:15.530 Utilization (in LBAs): 1048576 (4GiB) 00:09:15.530 Thin Provisioning: Not Supported 00:09:15.530 Per-NS Atomic Units: No 00:09:15.530 Maximum Single Source Range Length: 128 00:09:15.530 Maximum Copy Length: 128 00:09:15.530 Maximum Source Range Count: 128 00:09:15.530 NGUID/EUI64 Never Reused: No 00:09:15.530 Namespace Write Protected: No 00:09:15.530 Number of LBA Formats: 8 00:09:15.530 Current LBA Format: LBA Format #04 00:09:15.530 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.530 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.530 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.530 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.530 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.530 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.530 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.530 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.530 00:09:15.530 Namespace ID:2 00:09:15.530 Error Recovery Timeout: Unlimited 00:09:15.530 Command Set Identifier: NVM (00h) 00:09:15.530 Deallocate: Supported 00:09:15.530 Deallocated/Unwritten Error: Supported 00:09:15.530 Deallocated Read Value: All 0x00 00:09:15.530 Deallocate in Write Zeroes: Not Supported 00:09:15.530 Deallocated Guard Field: 0xFFFF 00:09:15.530 Flush: Supported 00:09:15.530 Reservation: Not Supported 00:09:15.530 Namespace Sharing Capabilities: Private 00:09:15.530 Size (in LBAs): 1048576 (4GiB) 00:09:15.530 Capacity (in LBAs): 1048576 (4GiB) 00:09:15.530 Utilization (in LBAs): 1048576 (4GiB) 00:09:15.530 Thin Provisioning: Not Supported 00:09:15.530 Per-NS Atomic Units: No 00:09:15.530 Maximum Single 
Source Range Length: 128 00:09:15.530 Maximum Copy Length: 128 00:09:15.530 Maximum Source Range Count: 128 00:09:15.530 NGUID/EUI64 Never Reused: No 00:09:15.530 Namespace Write Protected: No 00:09:15.530 Number of LBA Formats: 8 00:09:15.530 Current LBA Format: LBA Format #04 00:09:15.530 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.530 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.530 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.530 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.530 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.530 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.530 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.530 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.530 00:09:15.530 Namespace ID:3 00:09:15.530 Error Recovery Timeout: Unlimited 00:09:15.530 Command Set Identifier: NVM (00h) 00:09:15.530 Deallocate: Supported 00:09:15.530 Deallocated/Unwritten Error: Supported 00:09:15.530 Deallocated Read Value: All 0x00 00:09:15.530 Deallocate in Write Zeroes: Not Supported 00:09:15.530 Deallocated Guard Field: 0xFFFF 00:09:15.530 Flush: Supported 00:09:15.530 Reservation: Not Supported 00:09:15.530 Namespace Sharing Capabilities: Private 00:09:15.530 Size (in LBAs): 1048576 (4GiB) 00:09:15.530 Capacity (in LBAs): 1048576 (4GiB) 00:09:15.530 Utilization (in LBAs): 1048576 (4GiB) 00:09:15.530 Thin Provisioning: Not Supported 00:09:15.530 Per-NS Atomic Units: No 00:09:15.530 Maximum Single Source Range Length: 128 00:09:15.530 Maximum Copy Length: 128 00:09:15.530 Maximum Source Range Count: 128 00:09:15.530 NGUID/EUI64 Never Reused: No 00:09:15.530 Namespace Write Protected: No 00:09:15.530 Number of LBA Formats: 8 00:09:15.530 Current LBA Format: LBA Format #04 00:09:15.530 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.530 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.530 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.530 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.530 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.530 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.530 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.530 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.530 00:09:15.530 00:03:30 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:15.530 00:03:30 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:15.793 ===================================================== 00:09:15.793 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:15.793 ===================================================== 00:09:15.793 Controller Capabilities/Features 00:09:15.793 ================================ 00:09:15.793 Vendor ID: 1b36 00:09:15.793 Subsystem Vendor ID: 1af4 00:09:15.793 Serial Number: 12343 00:09:15.793 Model Number: QEMU NVMe Ctrl 00:09:15.793 Firmware Version: 8.0.0 00:09:15.793 Recommended Arb Burst: 6 00:09:15.793 IEEE OUI Identifier: 00 54 52 00:09:15.793 Multi-path I/O 00:09:15.793 May have multiple subsystem ports: No 00:09:15.793 May have multiple controllers: Yes 00:09:15.793 Associated with SR-IOV VF: No 00:09:15.793 Max Data Transfer Size: 524288 00:09:15.793 Max Number of Namespaces: 256 00:09:15.793 Max Number of I/O Queues: 64 00:09:15.793 NVMe Specification Version (VS): 1.4 00:09:15.793 NVMe Specification Version (Identify): 1.4 00:09:15.793 Maximum Queue Entries: 2048 00:09:15.793 Contiguous Queues 
Required: Yes 00:09:15.793 Arbitration Mechanisms Supported 00:09:15.793 Weighted Round Robin: Not Supported 00:09:15.793 Vendor Specific: Not Supported 00:09:15.793 Reset Timeout: 7500 ms 00:09:15.793 Doorbell Stride: 4 bytes 00:09:15.793 NVM Subsystem Reset: Not Supported 00:09:15.793 Command Sets Supported 00:09:15.793 NVM Command Set: Supported 00:09:15.793 Boot Partition: Not Supported 00:09:15.793 Memory Page Size Minimum: 4096 bytes 00:09:15.793 Memory Page Size Maximum: 65536 bytes 00:09:15.793 Persistent Memory Region: Not Supported 00:09:15.793 Optional Asynchronous Events Supported 00:09:15.793 Namespace Attribute Notices: Supported 00:09:15.793 Firmware Activation Notices: Not Supported 00:09:15.793 ANA Change Notices: Not Supported 00:09:15.793 PLE Aggregate Log Change Notices: Not Supported 00:09:15.793 LBA Status Info Alert Notices: Not Supported 00:09:15.793 EGE Aggregate Log Change Notices: Not Supported 00:09:15.793 Normal NVM Subsystem Shutdown event: Not Supported 00:09:15.793 Zone Descriptor Change Notices: Not Supported 00:09:15.793 Discovery Log Change Notices: Not Supported 00:09:15.793 Controller Attributes 00:09:15.793 128-bit Host Identifier: Not Supported 00:09:15.793 Non-Operational Permissive Mode: Not Supported 00:09:15.793 NVM Sets: Not Supported 00:09:15.793 Read Recovery Levels: Not Supported 00:09:15.793 Endurance Groups: Supported 00:09:15.793 Predictable Latency Mode: Not Supported 00:09:15.793 Traffic Based Keep Alive: Not Supported 00:09:15.793 Namespace Granularity: Not Supported 00:09:15.793 SQ Associations: Not Supported 00:09:15.793 UUID List: Not Supported 00:09:15.793 Multi-Domain Subsystem: Not Supported 00:09:15.793 Fixed Capacity Management: Not Supported 00:09:15.793 Variable Capacity Management: Not Supported 00:09:15.793 Delete Endurance Group: Not Supported 00:09:15.793 Delete NVM Set: Not Supported 00:09:15.793 Extended LBA Formats Supported: Supported 00:09:15.793 Flexible Data Placement Supported: Supported 00:09:15.793 00:09:15.793 Controller Memory Buffer Support 00:09:15.793 ================================ 00:09:15.793 Supported: No 00:09:15.793 00:09:15.793 Persistent Memory Region Support 00:09:15.793 ================================ 00:09:15.793 Supported: No 00:09:15.793 00:09:15.793 Admin Command Set Attributes 00:09:15.793 ============================ 00:09:15.793 Security Send/Receive: Not Supported 00:09:15.793 Format NVM: Supported 00:09:15.793 Firmware Activate/Download: Not Supported 00:09:15.793 Namespace Management: Supported 00:09:15.793 Device Self-Test: Not Supported 00:09:15.793 Directives: Supported 00:09:15.793 NVMe-MI: Not Supported 00:09:15.793 Virtualization Management: Not Supported 00:09:15.793 Doorbell Buffer Config: Supported 00:09:15.793 Get LBA Status Capability: Not Supported 00:09:15.793 Command & Feature Lockdown Capability: Not Supported 00:09:15.793 Abort Command Limit: 4 00:09:15.793 Async Event Request Limit: 4 00:09:15.793 Number of Firmware Slots: N/A 00:09:15.793 Firmware Slot 1 Read-Only: N/A 00:09:15.793 Firmware Activation Without Reset: N/A 00:09:15.793 Multiple Update Detection Support: N/A 00:09:15.793 Firmware Update Granularity: No Information Provided 00:09:15.793 Per-Namespace SMART Log: Yes 00:09:15.793 Asymmetric Namespace Access Log Page: Not Supported 00:09:15.793 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:15.793 Command Effects Log Page: Supported 00:09:15.793 Get Log Page Extended Data: Supported 00:09:15.793 Telemetry Log Pages: Not Supported 00:09:15.793 Persistent 
Event Log Pages: Not Supported 00:09:15.793 Supported Log Pages Log Page: May Support 00:09:15.793 Commands Supported & Effects Log Page: Not Supported 00:09:15.793 Feature Identifiers & Effects Log Page:May Support 00:09:15.794 NVMe-MI Commands & Effects Log Page: May Support 00:09:15.794 Data Area 4 for Telemetry Log: Not Supported 00:09:15.794 Error Log Page Entries Supported: 1 00:09:15.794 Keep Alive: Not Supported 00:09:15.794 00:09:15.794 NVM Command Set Attributes 00:09:15.794 ========================== 00:09:15.794 Submission Queue Entry Size 00:09:15.794 Max: 64 00:09:15.794 Min: 64 00:09:15.794 Completion Queue Entry Size 00:09:15.794 Max: 16 00:09:15.794 Min: 16 00:09:15.794 Number of Namespaces: 256 00:09:15.794 Compare Command: Supported 00:09:15.794 Write Uncorrectable Command: Not Supported 00:09:15.794 Dataset Management Command: Supported 00:09:15.794 Write Zeroes Command: Supported 00:09:15.794 Set Features Save Field: Supported 00:09:15.794 Reservations: Not Supported 00:09:15.794 Timestamp: Supported 00:09:15.794 Copy: Supported 00:09:15.794 Volatile Write Cache: Present 00:09:15.794 Atomic Write Unit (Normal): 1 00:09:15.794 Atomic Write Unit (PFail): 1 00:09:15.794 Atomic Compare & Write Unit: 1 00:09:15.794 Fused Compare & Write: Not Supported 00:09:15.794 Scatter-Gather List 00:09:15.794 SGL Command Set: Supported 00:09:15.794 SGL Keyed: Not Supported 00:09:15.794 SGL Bit Bucket Descriptor: Not Supported 00:09:15.794 SGL Metadata Pointer: Not Supported 00:09:15.794 Oversized SGL: Not Supported 00:09:15.794 SGL Metadata Address: Not Supported 00:09:15.794 SGL Offset: Not Supported 00:09:15.794 Transport SGL Data Block: Not Supported 00:09:15.794 Replay Protected Memory Block: Not Supported 00:09:15.794 00:09:15.794 Firmware Slot Information 00:09:15.794 ========================= 00:09:15.794 Active slot: 1 00:09:15.794 Slot 1 Firmware Revision: 1.0 00:09:15.794 00:09:15.794 00:09:15.794 Commands Supported and Effects 00:09:15.794 ============================== 00:09:15.794 Admin Commands 00:09:15.794 -------------- 00:09:15.794 Delete I/O Submission Queue (00h): Supported 00:09:15.794 Create I/O Submission Queue (01h): Supported 00:09:15.794 Get Log Page (02h): Supported 00:09:15.794 Delete I/O Completion Queue (04h): Supported 00:09:15.794 Create I/O Completion Queue (05h): Supported 00:09:15.794 Identify (06h): Supported 00:09:15.794 Abort (08h): Supported 00:09:15.794 Set Features (09h): Supported 00:09:15.794 Get Features (0Ah): Supported 00:09:15.794 Asynchronous Event Request (0Ch): Supported 00:09:15.794 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:15.794 Directive Send (19h): Supported 00:09:15.794 Directive Receive (1Ah): Supported 00:09:15.794 Virtualization Management (1Ch): Supported 00:09:15.794 Doorbell Buffer Config (7Ch): Supported 00:09:15.794 Format NVM (80h): Supported LBA-Change 00:09:15.794 I/O Commands 00:09:15.794 ------------ 00:09:15.794 Flush (00h): Supported LBA-Change 00:09:15.794 Write (01h): Supported LBA-Change 00:09:15.794 Read (02h): Supported 00:09:15.794 Compare (05h): Supported 00:09:15.794 Write Zeroes (08h): Supported LBA-Change 00:09:15.794 Dataset Management (09h): Supported LBA-Change 00:09:15.794 Unknown (0Ch): Supported 00:09:15.794 Unknown (12h): Supported 00:09:15.794 Copy (19h): Supported LBA-Change 00:09:15.794 Unknown (1Dh): Supported LBA-Change 00:09:15.794 00:09:15.794 Error Log 00:09:15.794 ========= 00:09:15.794 00:09:15.794 Arbitration 00:09:15.794 =========== 00:09:15.794 Arbitration Burst: no 
limit 00:09:15.794 00:09:15.794 Power Management 00:09:15.794 ================ 00:09:15.794 Number of Power States: 1 00:09:15.794 Current Power State: Power State #0 00:09:15.794 Power State #0: 00:09:15.794 Max Power: 25.00 W 00:09:15.794 Non-Operational State: Operational 00:09:15.794 Entry Latency: 16 microseconds 00:09:15.794 Exit Latency: 4 microseconds 00:09:15.794 Relative Read Throughput: 0 00:09:15.794 Relative Read Latency: 0 00:09:15.794 Relative Write Throughput: 0 00:09:15.794 Relative Write Latency: 0 00:09:15.794 Idle Power: Not Reported 00:09:15.794 Active Power: Not Reported 00:09:15.794 Non-Operational Permissive Mode: Not Supported 00:09:15.794 00:09:15.794 Health Information 00:09:15.794 ================== 00:09:15.794 Critical Warnings: 00:09:15.794 Available Spare Space: OK 00:09:15.794 Temperature: OK 00:09:15.794 Device Reliability: OK 00:09:15.794 Read Only: No 00:09:15.794 Volatile Memory Backup: OK 00:09:15.794 Current Temperature: 323 Kelvin (50 Celsius) 00:09:15.794 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:15.794 Available Spare: 0% 00:09:15.794 Available Spare Threshold: 0% 00:09:15.794 Life Percentage Used: 0% 00:09:15.794 Data Units Read: 1296 00:09:15.794 Data Units Written: 601 00:09:15.794 Host Read Commands: 61435 00:09:15.794 Host Write Commands: 30216 00:09:15.794 Controller Busy Time: 0 minutes 00:09:15.794 Power Cycles: 0 00:09:15.794 Power On Hours: 0 hours 00:09:15.794 Unsafe Shutdowns: 0 00:09:15.794 Unrecoverable Media Errors: 0 00:09:15.794 Lifetime Error Log Entries: 0 00:09:15.794 Warning Temperature Time: 0 minutes 00:09:15.794 Critical Temperature Time: 0 minutes 00:09:15.794 00:09:15.794 Number of Queues 00:09:15.794 ================ 00:09:15.794 Number of I/O Submission Queues: 64 00:09:15.794 Number of I/O Completion Queues: 64 00:09:15.794 00:09:15.794 ZNS Specific Controller Data 00:09:15.794 ============================ 00:09:15.794 Zone Append Size Limit: 0 00:09:15.794 00:09:15.794 00:09:15.794 Active Namespaces 00:09:15.794 ================= 00:09:15.794 Namespace ID:1 00:09:15.794 Error Recovery Timeout: Unlimited 00:09:15.794 Command Set Identifier: NVM (00h) 00:09:15.794 Deallocate: Supported 00:09:15.794 Deallocated/Unwritten Error: Supported 00:09:15.794 Deallocated Read Value: All 0x00 00:09:15.794 Deallocate in Write Zeroes: Not Supported 00:09:15.794 Deallocated Guard Field: 0xFFFF 00:09:15.794 Flush: Supported 00:09:15.794 Reservation: Not Supported 00:09:15.794 Namespace Sharing Capabilities: Multiple Controllers 00:09:15.794 Size (in LBAs): 262144 (1GiB) 00:09:15.794 Capacity (in LBAs): 262144 (1GiB) 00:09:15.794 Utilization (in LBAs): 262144 (1GiB) 00:09:15.795 Thin Provisioning: Not Supported 00:09:15.795 Per-NS Atomic Units: No 00:09:15.795 Maximum Single Source Range Length: 128 00:09:15.795 Maximum Copy Length: 128 00:09:15.795 Maximum Source Range Count: 128 00:09:15.795 NGUID/EUI64 Never Reused: No 00:09:15.795 Namespace Write Protected: No 00:09:15.795 Endurance group ID: 1 00:09:15.795 Number of LBA Formats: 8 00:09:15.795 Current LBA Format: LBA Format #04 00:09:15.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:15.795 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:15.795 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:15.795 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:15.795 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:15.795 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:15.795 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:15.795 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:09:15.795 00:09:15.795 Get Feature FDP: 00:09:15.795 ================ 00:09:15.795 Enabled: Yes 00:09:15.795 FDP configuration index: 0 00:09:15.795 00:09:15.795 FDP configurations log page 00:09:15.795 =========================== 00:09:15.795 Number of FDP configurations: 1 00:09:15.795 Version: 0 00:09:15.795 Size: 112 00:09:15.795 FDP Configuration Descriptor: 0 00:09:15.795 Descriptor Size: 96 00:09:15.795 Reclaim Group Identifier format: 2 00:09:15.795 FDP Volatile Write Cache: Not Present 00:09:15.795 FDP Configuration: Valid 00:09:15.795 Vendor Specific Size: 0 00:09:15.795 Number of Reclaim Groups: 2 00:09:15.795 Number of Reclaim Unit Handles: 8 00:09:15.795 Max Placement Identifiers: 128 00:09:15.795 Number of Namespaces Supported: 256 00:09:15.795 Reclaim Unit Nominal Size: 6000000 bytes 00:09:15.795 Estimated Reclaim Unit Time Limit: Not Reported 00:09:15.795 RUH Desc #000: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #001: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #002: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #003: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #004: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #005: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #006: RUH Type: Initially Isolated 00:09:15.795 RUH Desc #007: RUH Type: Initially Isolated 00:09:15.795 00:09:15.795 FDP reclaim unit handle usage log page 00:09:15.795 ====================================== 00:09:15.795 Number of Reclaim Unit Handles: 8 00:09:15.795 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:15.795 RUH Usage Desc #001: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #002: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #003: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #004: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #005: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #006: RUH Attributes: Unused 00:09:15.795 RUH Usage Desc #007: RUH Attributes: Unused 00:09:15.795 00:09:15.795 FDP statistics log page 00:09:15.795 ======================= 00:09:15.795 Host bytes with metadata written: 408064000 00:09:15.795 Media bytes with metadata written: 408158208 00:09:15.795 Media bytes erased: 0 00:09:15.795 00:09:15.795 FDP events log page 00:09:15.795 =================== 00:09:15.795 Number of FDP events: 0 00:09:15.795 00:09:15.795 00:09:15.795 real 0m1.154s 00:09:15.795 user 0m0.386s 00:09:15.795 sys 0m0.556s 00:09:15.795 00:03:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.795 00:03:30 -- common/autotest_common.sh@10 -- # set +x 00:09:15.795 ************************************ 00:09:15.795 END TEST nvme_identify 00:09:15.795 ************************************ 00:09:15.795 00:03:30 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:15.795 00:03:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.795 00:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.795 00:03:30 -- common/autotest_common.sh@10 -- # set +x 00:09:15.795 ************************************ 00:09:15.795 START TEST nvme_perf 00:09:15.795 ************************************ 00:09:15.795 00:03:30 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:15.795 00:03:30 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:17.188 Initializing NVMe Controllers 00:09:17.188 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:17.188 Attached to NVMe Controller at 
0000:00:06.0 [1b36:0010] 00:09:17.188 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:17.188 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:17.188 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:17.188 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:17.188 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:17.188 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:17.188 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:17.188 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:17.188 Initialization complete. Launching workers. 00:09:17.188 ======================================================== 00:09:17.188 Latency(us) 00:09:17.188 Device Information : IOPS MiB/s Average min max 00:09:17.188 PCIE (0000:00:09.0) NSID 1 from core 0: 10796.26 126.52 11850.75 7095.71 31023.83 00:09:17.188 PCIE (0000:00:06.0) NSID 1 from core 0: 10796.26 126.52 11849.43 6885.29 32104.11 00:09:17.188 PCIE (0000:00:07.0) NSID 1 from core 0: 10796.26 126.52 11840.59 7020.72 32572.00 00:09:17.188 PCIE (0000:00:08.0) NSID 1 from core 0: 10796.26 126.52 11830.32 5355.44 34812.42 00:09:17.188 PCIE (0000:00:08.0) NSID 2 from core 0: 10796.26 126.52 11820.46 4537.84 35449.53 00:09:17.188 PCIE (0000:00:08.0) NSID 3 from core 0: 10923.28 128.01 11673.04 3859.40 23070.54 00:09:17.188 ======================================================== 00:09:17.188 Total : 64904.60 760.60 11810.50 3859.40 35449.53 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 7410.609us 00:09:17.188 10.00000% : 7864.320us 00:09:17.188 25.00000% : 8418.855us 00:09:17.188 50.00000% : 9880.812us 00:09:17.188 75.00000% : 15123.692us 00:09:17.188 90.00000% : 17241.009us 00:09:17.188 95.00000% : 18753.378us 00:09:17.188 98.00000% : 20164.923us 00:09:17.188 99.00000% : 29239.138us 00:09:17.188 99.50000% : 30247.385us 00:09:17.188 99.90000% : 30852.332us 00:09:17.188 99.99000% : 31053.982us 00:09:17.188 99.99900% : 31053.982us 00:09:17.188 99.99990% : 31053.982us 00:09:17.188 99.99999% : 31053.982us 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 7259.372us 00:09:17.188 10.00000% : 7763.495us 00:09:17.188 25.00000% : 8418.855us 00:09:17.188 50.00000% : 9830.400us 00:09:17.188 75.00000% : 15022.868us 00:09:17.188 90.00000% : 17241.009us 00:09:17.188 95.00000% : 18854.203us 00:09:17.188 98.00000% : 20669.046us 00:09:17.188 99.00000% : 30045.735us 00:09:17.188 99.50000% : 31255.631us 00:09:17.188 99.90000% : 32062.228us 00:09:17.188 99.99000% : 32263.877us 00:09:17.188 99.99900% : 32263.877us 00:09:17.188 99.99990% : 32263.877us 00:09:17.188 99.99999% : 32263.877us 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 7360.197us 00:09:17.188 10.00000% : 7813.908us 00:09:17.188 25.00000% : 8368.443us 00:09:17.188 50.00000% : 9779.988us 00:09:17.188 75.00000% : 15224.517us 00:09:17.188 90.00000% : 16938.535us 00:09:17.188 95.00000% : 18652.554us 00:09:17.188 98.00000% : 19963.274us 00:09:17.188 99.00000% : 30650.683us 00:09:17.188 99.50000% : 31658.929us 00:09:17.188 99.90000% : 32465.526us 00:09:17.188 99.99000% : 
32667.175us 00:09:17.188 99.99900% : 32667.175us 00:09:17.188 99.99990% : 32667.175us 00:09:17.188 99.99999% : 32667.175us 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 6553.600us 00:09:17.188 10.00000% : 7813.908us 00:09:17.188 25.00000% : 8368.443us 00:09:17.188 50.00000% : 9729.575us 00:09:17.188 75.00000% : 15123.692us 00:09:17.188 90.00000% : 17140.185us 00:09:17.188 95.00000% : 18350.080us 00:09:17.188 98.00000% : 19761.625us 00:09:17.188 99.00000% : 32868.825us 00:09:17.188 99.50000% : 33877.071us 00:09:17.188 99.90000% : 34683.668us 00:09:17.188 99.99000% : 34885.317us 00:09:17.188 99.99900% : 34885.317us 00:09:17.188 99.99990% : 34885.317us 00:09:17.188 99.99999% : 34885.317us 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 5747.003us 00:09:17.188 10.00000% : 7813.908us 00:09:17.188 25.00000% : 8368.443us 00:09:17.188 50.00000% : 9679.163us 00:09:17.188 75.00000% : 15123.692us 00:09:17.188 90.00000% : 17140.185us 00:09:17.188 95.00000% : 18551.729us 00:09:17.188 98.00000% : 20164.923us 00:09:17.188 99.00000% : 33473.772us 00:09:17.188 99.50000% : 34482.018us 00:09:17.188 99.90000% : 35288.615us 00:09:17.188 99.99000% : 35490.265us 00:09:17.188 99.99900% : 35490.265us 00:09:17.188 99.99990% : 35490.265us 00:09:17.188 99.99999% : 35490.265us 00:09:17.188 00:09:17.188 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:17.188 ================================================================================= 00:09:17.188 1.00000% : 5142.055us 00:09:17.188 10.00000% : 7813.908us 00:09:17.188 25.00000% : 8368.443us 00:09:17.188 50.00000% : 9779.988us 00:09:17.188 75.00000% : 15022.868us 00:09:17.188 90.00000% : 17341.834us 00:09:17.188 95.00000% : 18652.554us 00:09:17.188 98.00000% : 19862.449us 00:09:17.188 99.00000% : 21273.994us 00:09:17.188 99.50000% : 22080.591us 00:09:17.188 99.90000% : 22887.188us 00:09:17.188 99.99000% : 23088.837us 00:09:17.188 99.99900% : 23088.837us 00:09:17.188 99.99990% : 23088.837us 00:09:17.188 99.99999% : 23088.837us 00:09:17.188 00:09:17.188 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:17.188 ============================================================================== 00:09:17.188 Range in us Cumulative IO count 00:09:17.188 7057.723 - 7108.135: 0.0092% ( 1) 00:09:17.188 7108.135 - 7158.548: 0.0551% ( 5) 00:09:17.188 7158.548 - 7208.960: 0.1471% ( 10) 00:09:17.188 7208.960 - 7259.372: 0.3217% ( 19) 00:09:17.188 7259.372 - 7309.785: 0.5331% ( 23) 00:09:17.188 7309.785 - 7360.197: 0.8364% ( 33) 00:09:17.188 7360.197 - 7410.609: 1.4246% ( 64) 00:09:17.188 7410.609 - 7461.022: 2.1875% ( 83) 00:09:17.188 7461.022 - 7511.434: 2.9596% ( 84) 00:09:17.188 7511.434 - 7561.846: 3.7960% ( 91) 00:09:17.188 7561.846 - 7612.258: 4.7335% ( 102) 00:09:17.188 7612.258 - 7662.671: 5.7629% ( 112) 00:09:17.188 7662.671 - 7713.083: 6.9853% ( 133) 00:09:17.188 7713.083 - 7763.495: 8.2629% ( 139) 00:09:17.188 7763.495 - 7813.908: 9.6967% ( 156) 00:09:17.188 7813.908 - 7864.320: 11.1121% ( 154) 00:09:17.188 7864.320 - 7914.732: 12.4540% ( 146) 00:09:17.188 7914.732 - 7965.145: 13.8971% ( 157) 00:09:17.188 7965.145 - 8015.557: 15.2574% ( 148) 00:09:17.188 8015.557 - 8065.969: 16.6085% ( 147) 00:09:17.188 8065.969 - 
8116.382: 18.0239% ( 154) 00:09:17.188 8116.382 - 8166.794: 19.4301% ( 153) 00:09:17.188 8166.794 - 8217.206: 20.7904% ( 148) 00:09:17.188 8217.206 - 8267.618: 22.1875% ( 152) 00:09:17.188 8267.618 - 8318.031: 23.5938% ( 153) 00:09:17.188 8318.031 - 8368.443: 24.9816% ( 151) 00:09:17.188 8368.443 - 8418.855: 26.3603% ( 150) 00:09:17.188 8418.855 - 8469.268: 27.7574% ( 152) 00:09:17.188 8469.268 - 8519.680: 29.1728% ( 154) 00:09:17.188 8519.680 - 8570.092: 30.5055% ( 145) 00:09:17.188 8570.092 - 8620.505: 31.9210% ( 154) 00:09:17.188 8620.505 - 8670.917: 33.2996% ( 150) 00:09:17.188 8670.917 - 8721.329: 34.6875% ( 151) 00:09:17.188 8721.329 - 8771.742: 36.1029% ( 154) 00:09:17.188 8771.742 - 8822.154: 37.4908% ( 151) 00:09:17.188 8822.154 - 8872.566: 38.8971% ( 153) 00:09:17.188 8872.566 - 8922.978: 40.2757% ( 150) 00:09:17.188 8922.978 - 8973.391: 41.5809% ( 142) 00:09:17.188 8973.391 - 9023.803: 42.8033% ( 133) 00:09:17.188 9023.803 - 9074.215: 43.8419% ( 113) 00:09:17.188 9074.215 - 9124.628: 44.7610% ( 100) 00:09:17.188 9124.628 - 9175.040: 45.5699% ( 88) 00:09:17.188 9175.040 - 9225.452: 46.2408% ( 73) 00:09:17.188 9225.452 - 9275.865: 46.8382% ( 65) 00:09:17.188 9275.865 - 9326.277: 47.4173% ( 63) 00:09:17.188 9326.277 - 9376.689: 47.8952% ( 52) 00:09:17.188 9376.689 - 9427.102: 48.3180% ( 46) 00:09:17.188 9427.102 - 9477.514: 48.6029% ( 31) 00:09:17.188 9477.514 - 9527.926: 48.8511% ( 27) 00:09:17.188 9527.926 - 9578.338: 49.0901% ( 26) 00:09:17.188 9578.338 - 9628.751: 49.2739% ( 20) 00:09:17.188 9628.751 - 9679.163: 49.4301% ( 17) 00:09:17.188 9679.163 - 9729.575: 49.5864% ( 17) 00:09:17.188 9729.575 - 9779.988: 49.7518% ( 18) 00:09:17.188 9779.988 - 9830.400: 49.8989% ( 16) 00:09:17.188 9830.400 - 9880.812: 50.0551% ( 17) 00:09:17.188 9880.812 - 9931.225: 50.2298% ( 19) 00:09:17.188 9931.225 - 9981.637: 50.3676% ( 15) 00:09:17.188 9981.637 - 10032.049: 50.5515% ( 20) 00:09:17.188 10032.049 - 10082.462: 50.6893% ( 15) 00:09:17.188 10082.462 - 10132.874: 50.8640% ( 19) 00:09:17.188 10132.874 - 10183.286: 51.0294% ( 18) 00:09:17.188 10183.286 - 10233.698: 51.1949% ( 18) 00:09:17.188 10233.698 - 10284.111: 51.3971% ( 22) 00:09:17.188 10284.111 - 10334.523: 51.5717% ( 19) 00:09:17.188 10334.523 - 10384.935: 51.7923% ( 24) 00:09:17.188 10384.935 - 10435.348: 52.0037% ( 23) 00:09:17.188 10435.348 - 10485.760: 52.2335% ( 25) 00:09:17.188 10485.760 - 10536.172: 52.4816% ( 27) 00:09:17.188 10536.172 - 10586.585: 52.7206% ( 26) 00:09:17.188 10586.585 - 10636.997: 52.9779% ( 28) 00:09:17.188 10636.997 - 10687.409: 53.2077% ( 25) 00:09:17.188 10687.409 - 10737.822: 53.4559% ( 27) 00:09:17.188 10737.822 - 10788.234: 53.6489% ( 21) 00:09:17.188 10788.234 - 10838.646: 53.8603% ( 23) 00:09:17.188 10838.646 - 10889.058: 54.0349% ( 19) 00:09:17.188 10889.058 - 10939.471: 54.1728% ( 15) 00:09:17.188 10939.471 - 10989.883: 54.3015% ( 14) 00:09:17.188 10989.883 - 11040.295: 54.4393% ( 15) 00:09:17.188 11040.295 - 11090.708: 54.5588% ( 13) 00:09:17.188 11090.708 - 11141.120: 54.6691% ( 12) 00:09:17.188 11141.120 - 11191.532: 54.7886% ( 13) 00:09:17.188 11191.532 - 11241.945: 54.8989% ( 12) 00:09:17.188 11241.945 - 11292.357: 55.0184% ( 13) 00:09:17.188 11292.357 - 11342.769: 55.1195% ( 11) 00:09:17.188 11342.769 - 11393.182: 55.2390% ( 13) 00:09:17.188 11393.182 - 11443.594: 55.3768% ( 15) 00:09:17.188 11443.594 - 11494.006: 55.5239% ( 16) 00:09:17.188 11494.006 - 11544.418: 55.6618% ( 15) 00:09:17.188 11544.418 - 11594.831: 55.7904% ( 14) 00:09:17.188 11594.831 - 11645.243: 55.9375% ( 16) 00:09:17.188 
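The cumulative latency histogram running above and below lists one bucket per entry in the form "lower_us - upper_us: cumulative% ( I/Os in bucket)"; the running percentage crosses 50% in the 9830.400 - 9880.812 bucket (50.0551%), which matches the 50.00000% : 9880.812us row in the summary block for this controller, and the other percentile rows appear to line up with bucket edges the same way. A minimal offline parsing sketch under that reading of the format (the file name perf.log, the regex, and the helper names are illustrative assumptions, not part of the SPDK tooling shown in this run):

    import re

    # Bucket entries as printed in this log:
    #   "<lower_us> - <upper_us>: <cumulative %> ( <I/Os in bucket>)"
    # Regex, file name, and function names are assumptions for illustration only.
    BUCKET = re.compile(r'(\d+\.\d+) - (\d+\.\d+):\s+([\d.]+)%\s+\(\s*(\d+)\)')

    def read_buckets(path="perf.log"):
        """Collect (lower_us, upper_us, cumulative_pct, count) from a saved copy of this output."""
        buckets = []
        with open(path) as fh:
            for line in fh:
                for m in BUCKET.finditer(line):
                    lo, hi, cum, cnt = m.groups()
                    buckets.append((float(lo), float(hi), float(cum), int(cnt)))
        return buckets

    def approx_percentile(buckets, pct):
        """Upper edge (us) of the first bucket whose cumulative share reaches pct."""
        for _lo, hi, cum, _cnt in buckets:
            if cum >= pct:
                return hi
        return None

Fed only the 0000:00:09.0 NSID 1 buckets, approx_percentile(buckets, 50.0) returns 9880.812, in line with the summary row quoted above.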
11645.243 - 11695.655: 56.0938% ( 17) 00:09:17.188 11695.655 - 11746.068: 56.2408% ( 16) 00:09:17.188 11746.068 - 11796.480: 56.3879% ( 16) 00:09:17.188 11796.480 - 11846.892: 56.5257% ( 15) 00:09:17.188 11846.892 - 11897.305: 56.6912% ( 18) 00:09:17.188 11897.305 - 11947.717: 56.8382% ( 16) 00:09:17.188 11947.717 - 11998.129: 56.9945% ( 17) 00:09:17.188 11998.129 - 12048.542: 57.1415% ( 16) 00:09:17.188 12048.542 - 12098.954: 57.2978% ( 17) 00:09:17.188 12098.954 - 12149.366: 57.4357% ( 15) 00:09:17.188 12149.366 - 12199.778: 57.5827% ( 16) 00:09:17.188 12199.778 - 12250.191: 57.7390% ( 17) 00:09:17.188 12250.191 - 12300.603: 57.8493% ( 12) 00:09:17.188 12300.603 - 12351.015: 57.9688% ( 13) 00:09:17.188 12351.015 - 12401.428: 58.0515% ( 9) 00:09:17.188 12401.428 - 12451.840: 58.1434% ( 10) 00:09:17.189 12451.840 - 12502.252: 58.2261% ( 9) 00:09:17.189 12502.252 - 12552.665: 58.3272% ( 11) 00:09:17.189 12552.665 - 12603.077: 58.4467% ( 13) 00:09:17.189 12603.077 - 12653.489: 58.5386% ( 10) 00:09:17.189 12653.489 - 12703.902: 58.6213% ( 9) 00:09:17.189 12703.902 - 12754.314: 58.7132% ( 10) 00:09:17.189 12754.314 - 12804.726: 58.8235% ( 12) 00:09:17.189 12804.726 - 12855.138: 58.9982% ( 19) 00:09:17.189 12855.138 - 12905.551: 59.1176% ( 13) 00:09:17.189 12905.551 - 13006.375: 59.3934% ( 30) 00:09:17.189 13006.375 - 13107.200: 59.6967% ( 33) 00:09:17.189 13107.200 - 13208.025: 60.1287% ( 47) 00:09:17.189 13208.025 - 13308.849: 60.4963% ( 40) 00:09:17.189 13308.849 - 13409.674: 60.9835% ( 53) 00:09:17.189 13409.674 - 13510.498: 61.5625% ( 63) 00:09:17.189 13510.498 - 13611.323: 62.1415% ( 63) 00:09:17.189 13611.323 - 13712.148: 62.8033% ( 72) 00:09:17.189 13712.148 - 13812.972: 63.4835% ( 74) 00:09:17.189 13812.972 - 13913.797: 64.1268% ( 70) 00:09:17.189 13913.797 - 14014.622: 64.7886% ( 72) 00:09:17.189 14014.622 - 14115.446: 65.4779% ( 75) 00:09:17.189 14115.446 - 14216.271: 66.2960% ( 89) 00:09:17.189 14216.271 - 14317.095: 67.2794% ( 107) 00:09:17.189 14317.095 - 14417.920: 68.3088% ( 112) 00:09:17.189 14417.920 - 14518.745: 69.4118% ( 120) 00:09:17.189 14518.745 - 14619.569: 70.5607% ( 125) 00:09:17.189 14619.569 - 14720.394: 71.6728% ( 121) 00:09:17.189 14720.394 - 14821.218: 72.7574% ( 118) 00:09:17.189 14821.218 - 14922.043: 73.8695% ( 121) 00:09:17.189 14922.043 - 15022.868: 74.9816% ( 121) 00:09:17.189 15022.868 - 15123.692: 76.1213% ( 124) 00:09:17.189 15123.692 - 15224.517: 77.1967% ( 117) 00:09:17.189 15224.517 - 15325.342: 78.2169% ( 111) 00:09:17.189 15325.342 - 15426.166: 79.2004% ( 107) 00:09:17.189 15426.166 - 15526.991: 80.0735% ( 95) 00:09:17.189 15526.991 - 15627.815: 80.8915% ( 89) 00:09:17.189 15627.815 - 15728.640: 81.7096% ( 89) 00:09:17.189 15728.640 - 15829.465: 82.4724% ( 83) 00:09:17.189 15829.465 - 15930.289: 83.2353% ( 83) 00:09:17.189 15930.289 - 16031.114: 83.9062% ( 73) 00:09:17.189 16031.114 - 16131.938: 84.5312% ( 68) 00:09:17.189 16131.938 - 16232.763: 85.1562% ( 68) 00:09:17.189 16232.763 - 16333.588: 85.7445% ( 64) 00:09:17.189 16333.588 - 16434.412: 86.3327% ( 64) 00:09:17.189 16434.412 - 16535.237: 86.9118% ( 63) 00:09:17.189 16535.237 - 16636.062: 87.3989% ( 53) 00:09:17.189 16636.062 - 16736.886: 87.8217% ( 46) 00:09:17.189 16736.886 - 16837.711: 88.2077% ( 42) 00:09:17.189 16837.711 - 16938.535: 88.6305% ( 46) 00:09:17.189 16938.535 - 17039.360: 89.1360% ( 55) 00:09:17.189 17039.360 - 17140.185: 89.6140% ( 52) 00:09:17.189 17140.185 - 17241.009: 90.0092% ( 43) 00:09:17.189 17241.009 - 17341.834: 90.4596% ( 49) 00:09:17.189 17341.834 - 17442.658: 
90.8732% ( 45) 00:09:17.189 17442.658 - 17543.483: 91.2224% ( 38) 00:09:17.189 17543.483 - 17644.308: 91.5533% ( 36) 00:09:17.189 17644.308 - 17745.132: 91.9393% ( 42) 00:09:17.189 17745.132 - 17845.957: 92.3070% ( 40) 00:09:17.189 17845.957 - 17946.782: 92.6562% ( 38) 00:09:17.189 17946.782 - 18047.606: 93.0055% ( 38) 00:09:17.189 18047.606 - 18148.431: 93.3088% ( 33) 00:09:17.189 18148.431 - 18249.255: 93.6121% ( 33) 00:09:17.189 18249.255 - 18350.080: 93.9246% ( 34) 00:09:17.189 18350.080 - 18450.905: 94.3199% ( 43) 00:09:17.189 18450.905 - 18551.729: 94.6415% ( 35) 00:09:17.189 18551.729 - 18652.554: 94.9816% ( 37) 00:09:17.189 18652.554 - 18753.378: 95.2941% ( 34) 00:09:17.189 18753.378 - 18854.203: 95.6250% ( 36) 00:09:17.189 18854.203 - 18955.028: 95.9467% ( 35) 00:09:17.189 18955.028 - 19055.852: 96.2776% ( 36) 00:09:17.189 19055.852 - 19156.677: 96.5625% ( 31) 00:09:17.189 19156.677 - 19257.502: 96.8199% ( 28) 00:09:17.189 19257.502 - 19358.326: 97.0588% ( 26) 00:09:17.189 19358.326 - 19459.151: 97.2610% ( 22) 00:09:17.189 19459.151 - 19559.975: 97.4173% ( 17) 00:09:17.189 19559.975 - 19660.800: 97.5551% ( 15) 00:09:17.189 19660.800 - 19761.625: 97.6930% ( 15) 00:09:17.189 19761.625 - 19862.449: 97.7941% ( 11) 00:09:17.189 19862.449 - 19963.274: 97.8676% ( 8) 00:09:17.189 19963.274 - 20064.098: 97.9688% ( 11) 00:09:17.189 20064.098 - 20164.923: 98.0515% ( 9) 00:09:17.189 20164.923 - 20265.748: 98.1618% ( 12) 00:09:17.189 20265.748 - 20366.572: 98.2537% ( 10) 00:09:17.189 20366.572 - 20467.397: 98.3364% ( 9) 00:09:17.189 20467.397 - 20568.222: 98.4375% ( 11) 00:09:17.189 20568.222 - 20669.046: 98.5294% ( 10) 00:09:17.189 20669.046 - 20769.871: 98.6121% ( 9) 00:09:17.189 20769.871 - 20870.695: 98.7040% ( 10) 00:09:17.189 20870.695 - 20971.520: 98.7408% ( 4) 00:09:17.189 20971.520 - 21072.345: 98.7868% ( 5) 00:09:17.189 21072.345 - 21173.169: 98.8143% ( 3) 00:09:17.189 21173.169 - 21273.994: 98.8235% ( 1) 00:09:17.189 28634.191 - 28835.840: 98.8603% ( 4) 00:09:17.189 28835.840 - 29037.489: 98.9706% ( 12) 00:09:17.189 29037.489 - 29239.138: 99.0809% ( 12) 00:09:17.189 29239.138 - 29440.788: 99.1820% ( 11) 00:09:17.189 29440.788 - 29642.437: 99.2923% ( 12) 00:09:17.189 29642.437 - 29844.086: 99.3934% ( 11) 00:09:17.189 29844.086 - 30045.735: 99.4853% ( 10) 00:09:17.189 30045.735 - 30247.385: 99.5956% ( 12) 00:09:17.189 30247.385 - 30449.034: 99.6967% ( 11) 00:09:17.189 30449.034 - 30650.683: 99.8070% ( 12) 00:09:17.189 30650.683 - 30852.332: 99.9173% ( 12) 00:09:17.189 30852.332 - 31053.982: 100.0000% ( 9) 00:09:17.189 00:09:17.189 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:17.189 ============================================================================== 00:09:17.189 Range in us Cumulative IO count 00:09:17.189 6856.074 - 6906.486: 0.0092% ( 1) 00:09:17.189 6906.486 - 6956.898: 0.0368% ( 3) 00:09:17.189 6956.898 - 7007.311: 0.1103% ( 8) 00:09:17.189 7007.311 - 7057.723: 0.1746% ( 7) 00:09:17.189 7057.723 - 7108.135: 0.3860% ( 23) 00:09:17.189 7108.135 - 7158.548: 0.7169% ( 36) 00:09:17.189 7158.548 - 7208.960: 0.9835% ( 29) 00:09:17.189 7208.960 - 7259.372: 1.4430% ( 50) 00:09:17.189 7259.372 - 7309.785: 1.9118% ( 51) 00:09:17.189 7309.785 - 7360.197: 2.6103% ( 76) 00:09:17.189 7360.197 - 7410.609: 3.2812% ( 73) 00:09:17.189 7410.609 - 7461.022: 4.0717% ( 86) 00:09:17.189 7461.022 - 7511.434: 4.9173% ( 92) 00:09:17.189 7511.434 - 7561.846: 5.9743% ( 115) 00:09:17.189 7561.846 - 7612.258: 6.9669% ( 108) 00:09:17.189 7612.258 - 7662.671: 8.1526% ( 129) 
00:09:17.189 7662.671 - 7713.083: 9.3107% ( 126) 00:09:17.189 7713.083 - 7763.495: 10.4504% ( 124) 00:09:17.189 7763.495 - 7813.908: 11.5533% ( 120) 00:09:17.189 7813.908 - 7864.320: 12.7757% ( 133) 00:09:17.189 7864.320 - 7914.732: 13.8787% ( 120) 00:09:17.189 7914.732 - 7965.145: 15.0827% ( 131) 00:09:17.189 7965.145 - 8015.557: 16.3327% ( 136) 00:09:17.189 8015.557 - 8065.969: 17.4173% ( 118) 00:09:17.189 8065.969 - 8116.382: 18.5570% ( 124) 00:09:17.189 8116.382 - 8166.794: 19.7978% ( 135) 00:09:17.189 8166.794 - 8217.206: 20.8824% ( 118) 00:09:17.189 8217.206 - 8267.618: 21.9945% ( 121) 00:09:17.189 8267.618 - 8318.031: 23.1618% ( 127) 00:09:17.189 8318.031 - 8368.443: 24.3107% ( 125) 00:09:17.189 8368.443 - 8418.855: 25.5974% ( 140) 00:09:17.189 8418.855 - 8469.268: 26.7463% ( 125) 00:09:17.189 8469.268 - 8519.680: 27.8309% ( 118) 00:09:17.189 8519.680 - 8570.092: 29.0441% ( 132) 00:09:17.189 8570.092 - 8620.505: 30.2574% ( 132) 00:09:17.189 8620.505 - 8670.917: 31.4890% ( 134) 00:09:17.189 8670.917 - 8721.329: 32.6930% ( 131) 00:09:17.189 8721.329 - 8771.742: 33.8879% ( 130) 00:09:17.189 8771.742 - 8822.154: 35.1746% ( 140) 00:09:17.189 8822.154 - 8872.566: 36.2960% ( 122) 00:09:17.189 8872.566 - 8922.978: 37.5919% ( 141) 00:09:17.189 8922.978 - 8973.391: 38.8787% ( 140) 00:09:17.189 8973.391 - 9023.803: 40.1838% ( 142) 00:09:17.189 9023.803 - 9074.215: 41.4062% ( 133) 00:09:17.189 9074.215 - 9124.628: 42.5551% ( 125) 00:09:17.189 9124.628 - 9175.040: 43.6581% ( 120) 00:09:17.189 9175.040 - 9225.452: 44.5864% ( 101) 00:09:17.189 9225.452 - 9275.865: 45.4688% ( 96) 00:09:17.189 9275.865 - 9326.277: 46.2224% ( 82) 00:09:17.189 9326.277 - 9376.689: 46.8382% ( 67) 00:09:17.189 9376.689 - 9427.102: 47.4357% ( 65) 00:09:17.189 9427.102 - 9477.514: 48.0147% ( 63) 00:09:17.189 9477.514 - 9527.926: 48.3640% ( 38) 00:09:17.189 9527.926 - 9578.338: 48.7500% ( 42) 00:09:17.189 9578.338 - 9628.751: 49.0625% ( 34) 00:09:17.189 9628.751 - 9679.163: 49.4761% ( 45) 00:09:17.189 9679.163 - 9729.575: 49.6507% ( 19) 00:09:17.189 9729.575 - 9779.988: 49.8713% ( 24) 00:09:17.189 9779.988 - 9830.400: 50.0460% ( 19) 00:09:17.189 9830.400 - 9880.812: 50.2574% ( 23) 00:09:17.189 9880.812 - 9931.225: 50.4871% ( 25) 00:09:17.189 9931.225 - 9981.637: 50.6985% ( 23) 00:09:17.189 9981.637 - 10032.049: 50.8915% ( 21) 00:09:17.189 10032.049 - 10082.462: 51.1213% ( 25) 00:09:17.189 10082.462 - 10132.874: 51.3051% ( 20) 00:09:17.189 10132.874 - 10183.286: 51.5625% ( 28) 00:09:17.189 10183.286 - 10233.698: 51.8107% ( 27) 00:09:17.189 10233.698 - 10284.111: 52.0312% ( 24) 00:09:17.189 10284.111 - 10334.523: 52.2426% ( 23) 00:09:17.189 10334.523 - 10384.935: 52.4908% ( 27) 00:09:17.189 10384.935 - 10435.348: 52.7298% ( 26) 00:09:17.189 10435.348 - 10485.760: 52.9779% ( 27) 00:09:17.189 10485.760 - 10536.172: 53.2353% ( 28) 00:09:17.189 10536.172 - 10586.585: 53.4375% ( 22) 00:09:17.189 10586.585 - 10636.997: 53.6765% ( 26) 00:09:17.189 10636.997 - 10687.409: 53.8879% ( 23) 00:09:17.189 10687.409 - 10737.822: 54.1085% ( 24) 00:09:17.189 10737.822 - 10788.234: 54.2923% ( 20) 00:09:17.189 10788.234 - 10838.646: 54.4945% ( 22) 00:09:17.189 10838.646 - 10889.058: 54.6415% ( 16) 00:09:17.189 10889.058 - 10939.471: 54.8070% ( 18) 00:09:17.189 10939.471 - 10989.883: 54.9540% ( 16) 00:09:17.189 10989.883 - 11040.295: 55.1379% ( 20) 00:09:17.189 11040.295 - 11090.708: 55.2757% ( 15) 00:09:17.189 11090.708 - 11141.120: 55.4412% ( 18) 00:09:17.189 11141.120 - 11191.532: 55.5699% ( 14) 00:09:17.189 11191.532 - 11241.945: 55.6893% ( 
13) 00:09:17.189 11241.945 - 11292.357: 55.8088% ( 13) 00:09:17.189 11292.357 - 11342.769: 55.9283% ( 13) 00:09:17.189 11342.769 - 11393.182: 56.0478% ( 13) 00:09:17.189 11393.182 - 11443.594: 56.1673% ( 13) 00:09:17.189 11443.594 - 11494.006: 56.2684% ( 11) 00:09:17.189 11494.006 - 11544.418: 56.3787% ( 12) 00:09:17.189 11544.418 - 11594.831: 56.4982% ( 13) 00:09:17.189 11594.831 - 11645.243: 56.6176% ( 13) 00:09:17.189 11645.243 - 11695.655: 56.7096% ( 10) 00:09:17.189 11695.655 - 11746.068: 56.8658% ( 17) 00:09:17.189 11746.068 - 11796.480: 56.9393% ( 8) 00:09:17.189 11796.480 - 11846.892: 57.0864% ( 16) 00:09:17.189 11846.892 - 11897.305: 57.1599% ( 8) 00:09:17.189 11897.305 - 11947.717: 57.2426% ( 9) 00:09:17.189 11947.717 - 11998.129: 57.3346% ( 10) 00:09:17.189 11998.129 - 12048.542: 57.4081% ( 8) 00:09:17.189 12048.542 - 12098.954: 57.4816% ( 8) 00:09:17.189 12098.954 - 12149.366: 57.5643% ( 9) 00:09:17.189 12149.366 - 12199.778: 57.6379% ( 8) 00:09:17.189 12199.778 - 12250.191: 57.7390% ( 11) 00:09:17.189 12250.191 - 12300.603: 57.8493% ( 12) 00:09:17.189 12300.603 - 12351.015: 57.9412% ( 10) 00:09:17.189 12351.015 - 12401.428: 58.0423% ( 11) 00:09:17.189 12401.428 - 12451.840: 58.1710% ( 14) 00:09:17.189 12451.840 - 12502.252: 58.3180% ( 16) 00:09:17.189 12502.252 - 12552.665: 58.4007% ( 9) 00:09:17.189 12552.665 - 12603.077: 58.5202% ( 13) 00:09:17.189 12603.077 - 12653.489: 58.6581% ( 15) 00:09:17.189 12653.489 - 12703.902: 58.8511% ( 21) 00:09:17.189 12703.902 - 12754.314: 58.9522% ( 11) 00:09:17.189 12754.314 - 12804.726: 59.1452% ( 21) 00:09:17.189 12804.726 - 12855.138: 59.3382% ( 21) 00:09:17.189 12855.138 - 12905.551: 59.4853% ( 16) 00:09:17.189 12905.551 - 13006.375: 59.8254% ( 37) 00:09:17.189 13006.375 - 13107.200: 60.2298% ( 44) 00:09:17.189 13107.200 - 13208.025: 60.7077% ( 52) 00:09:17.189 13208.025 - 13308.849: 61.2408% ( 58) 00:09:17.189 13308.849 - 13409.674: 61.8199% ( 63) 00:09:17.189 13409.674 - 13510.498: 62.4449% ( 68) 00:09:17.189 13510.498 - 13611.323: 63.1618% ( 78) 00:09:17.189 13611.323 - 13712.148: 63.8235% ( 72) 00:09:17.189 13712.148 - 13812.972: 64.4761% ( 71) 00:09:17.189 13812.972 - 13913.797: 65.2022% ( 79) 00:09:17.189 13913.797 - 14014.622: 65.8548% ( 71) 00:09:17.189 14014.622 - 14115.446: 66.7647% ( 99) 00:09:17.189 14115.446 - 14216.271: 67.6287% ( 94) 00:09:17.189 14216.271 - 14317.095: 68.5110% ( 96) 00:09:17.189 14317.095 - 14417.920: 69.5221% ( 110) 00:09:17.189 14417.920 - 14518.745: 70.4504% ( 101) 00:09:17.189 14518.745 - 14619.569: 71.4706% ( 111) 00:09:17.189 14619.569 - 14720.394: 72.5551% ( 118) 00:09:17.189 14720.394 - 14821.218: 73.4283% ( 95) 00:09:17.189 14821.218 - 14922.043: 74.3382% ( 99) 00:09:17.189 14922.043 - 15022.868: 75.5423% ( 131) 00:09:17.189 15022.868 - 15123.692: 76.3695% ( 90) 00:09:17.189 15123.692 - 15224.517: 77.3162% ( 103) 00:09:17.189 15224.517 - 15325.342: 78.2353% ( 100) 00:09:17.189 15325.342 - 15426.166: 79.0901% ( 93) 00:09:17.189 15426.166 - 15526.991: 80.1103% ( 111) 00:09:17.189 15526.991 - 15627.815: 81.1213% ( 110) 00:09:17.189 15627.815 - 15728.640: 81.8290% ( 77) 00:09:17.189 15728.640 - 15829.465: 82.5276% ( 76) 00:09:17.189 15829.465 - 15930.289: 83.1801% ( 71) 00:09:17.189 15930.289 - 16031.114: 83.7868% ( 66) 00:09:17.189 16031.114 - 16131.938: 84.3474% ( 61) 00:09:17.189 16131.938 - 16232.763: 84.9540% ( 66) 00:09:17.189 16232.763 - 16333.588: 85.7629% ( 88) 00:09:17.189 16333.588 - 16434.412: 86.3695% ( 66) 00:09:17.189 16434.412 - 16535.237: 86.9853% ( 67) 00:09:17.190 16535.237 - 
16636.062: 87.4908% ( 55) 00:09:17.190 16636.062 - 16736.886: 88.1434% ( 71) 00:09:17.190 16736.886 - 16837.711: 88.6213% ( 52) 00:09:17.190 16837.711 - 16938.535: 89.0441% ( 46) 00:09:17.190 16938.535 - 17039.360: 89.4577% ( 45) 00:09:17.190 17039.360 - 17140.185: 89.8438% ( 42) 00:09:17.190 17140.185 - 17241.009: 90.2022% ( 39) 00:09:17.190 17241.009 - 17341.834: 90.5882% ( 42) 00:09:17.190 17341.834 - 17442.658: 91.0570% ( 51) 00:09:17.190 17442.658 - 17543.483: 91.3879% ( 36) 00:09:17.190 17543.483 - 17644.308: 91.7004% ( 34) 00:09:17.190 17644.308 - 17745.132: 91.9945% ( 32) 00:09:17.190 17745.132 - 17845.957: 92.2978% ( 33) 00:09:17.190 17845.957 - 17946.782: 92.5643% ( 29) 00:09:17.190 17946.782 - 18047.606: 92.8768% ( 34) 00:09:17.190 18047.606 - 18148.431: 93.1710% ( 32) 00:09:17.190 18148.431 - 18249.255: 93.4099% ( 26) 00:09:17.190 18249.255 - 18350.080: 93.7316% ( 35) 00:09:17.190 18350.080 - 18450.905: 93.9614% ( 25) 00:09:17.190 18450.905 - 18551.729: 94.2279% ( 29) 00:09:17.190 18551.729 - 18652.554: 94.4853% ( 28) 00:09:17.190 18652.554 - 18753.378: 94.7243% ( 26) 00:09:17.190 18753.378 - 18854.203: 95.0276% ( 33) 00:09:17.190 18854.203 - 18955.028: 95.2941% ( 29) 00:09:17.190 18955.028 - 19055.852: 95.5055% ( 23) 00:09:17.190 19055.852 - 19156.677: 95.6893% ( 20) 00:09:17.190 19156.677 - 19257.502: 95.8732% ( 20) 00:09:17.190 19257.502 - 19358.326: 96.0478% ( 19) 00:09:17.190 19358.326 - 19459.151: 96.2224% ( 19) 00:09:17.190 19459.151 - 19559.975: 96.3603% ( 15) 00:09:17.190 19559.975 - 19660.800: 96.5533% ( 21) 00:09:17.190 19660.800 - 19761.625: 96.7004% ( 16) 00:09:17.190 19761.625 - 19862.449: 96.8566% ( 17) 00:09:17.190 19862.449 - 19963.274: 97.0221% ( 18) 00:09:17.190 19963.274 - 20064.098: 97.1783% ( 17) 00:09:17.190 20064.098 - 20164.923: 97.3438% ( 18) 00:09:17.190 20164.923 - 20265.748: 97.4908% ( 16) 00:09:17.190 20265.748 - 20366.572: 97.6287% ( 15) 00:09:17.190 20366.572 - 20467.397: 97.7574% ( 14) 00:09:17.190 20467.397 - 20568.222: 97.9228% ( 18) 00:09:17.190 20568.222 - 20669.046: 98.0882% ( 18) 00:09:17.190 20669.046 - 20769.871: 98.1893% ( 11) 00:09:17.190 20769.871 - 20870.695: 98.2629% ( 8) 00:09:17.190 20870.695 - 20971.520: 98.3732% ( 12) 00:09:17.190 20971.520 - 21072.345: 98.4743% ( 11) 00:09:17.190 21072.345 - 21173.169: 98.5754% ( 11) 00:09:17.190 21173.169 - 21273.994: 98.6305% ( 6) 00:09:17.190 21273.994 - 21374.818: 98.7408% ( 12) 00:09:17.190 21374.818 - 21475.643: 98.8143% ( 8) 00:09:17.190 21475.643 - 21576.468: 98.8235% ( 1) 00:09:17.190 29440.788 - 29642.437: 98.8695% ( 5) 00:09:17.190 29642.437 - 29844.086: 98.9706% ( 11) 00:09:17.190 29844.086 - 30045.735: 99.0625% ( 10) 00:09:17.190 30045.735 - 30247.385: 99.1176% ( 6) 00:09:17.190 30247.385 - 30449.034: 99.2371% ( 13) 00:09:17.190 30449.034 - 30650.683: 99.3382% ( 11) 00:09:17.190 30650.683 - 30852.332: 99.4393% ( 11) 00:09:17.190 30852.332 - 31053.982: 99.4945% ( 6) 00:09:17.190 31053.982 - 31255.631: 99.6140% ( 13) 00:09:17.190 31255.631 - 31457.280: 99.6875% ( 8) 00:09:17.190 31457.280 - 31658.929: 99.7886% ( 11) 00:09:17.190 31658.929 - 31860.578: 99.8713% ( 9) 00:09:17.190 31860.578 - 32062.228: 99.9540% ( 9) 00:09:17.190 32062.228 - 32263.877: 100.0000% ( 5) 00:09:17.190 00:09:17.190 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:17.190 ============================================================================== 00:09:17.190 Range in us Cumulative IO count 00:09:17.190 7007.311 - 7057.723: 0.0919% ( 10) 00:09:17.190 7057.723 - 7108.135: 0.1930% ( 11) 
00:09:17.190 7108.135 - 7158.548: 0.3217% ( 14) 00:09:17.190 7158.548 - 7208.960: 0.4412% ( 13) 00:09:17.190 7208.960 - 7259.372: 0.5882% ( 16) 00:09:17.190 7259.372 - 7309.785: 0.8272% ( 26) 00:09:17.190 7309.785 - 7360.197: 1.2132% ( 42) 00:09:17.190 7360.197 - 7410.609: 1.7279% ( 56) 00:09:17.190 7410.609 - 7461.022: 2.5460% ( 89) 00:09:17.190 7461.022 - 7511.434: 3.4926% ( 103) 00:09:17.190 7511.434 - 7561.846: 4.3842% ( 97) 00:09:17.190 7561.846 - 7612.258: 5.3125% ( 101) 00:09:17.190 7612.258 - 7662.671: 6.3235% ( 110) 00:09:17.190 7662.671 - 7713.083: 7.4724% ( 125) 00:09:17.190 7713.083 - 7763.495: 8.7224% ( 136) 00:09:17.190 7763.495 - 7813.908: 10.0092% ( 140) 00:09:17.190 7813.908 - 7864.320: 11.3327% ( 144) 00:09:17.190 7864.320 - 7914.732: 12.7298% ( 152) 00:09:17.190 7914.732 - 7965.145: 14.0533% ( 144) 00:09:17.190 7965.145 - 8015.557: 15.4504% ( 152) 00:09:17.190 8015.557 - 8065.969: 16.7831% ( 145) 00:09:17.190 8065.969 - 8116.382: 18.1801% ( 152) 00:09:17.190 8116.382 - 8166.794: 19.5312% ( 147) 00:09:17.190 8166.794 - 8217.206: 20.9191% ( 151) 00:09:17.190 8217.206 - 8267.618: 22.3438% ( 155) 00:09:17.190 8267.618 - 8318.031: 23.7868% ( 157) 00:09:17.190 8318.031 - 8368.443: 25.1930% ( 153) 00:09:17.190 8368.443 - 8418.855: 26.5441% ( 147) 00:09:17.190 8418.855 - 8469.268: 27.8952% ( 147) 00:09:17.190 8469.268 - 8519.680: 29.2463% ( 147) 00:09:17.190 8519.680 - 8570.092: 30.6250% ( 150) 00:09:17.190 8570.092 - 8620.505: 31.9485% ( 144) 00:09:17.190 8620.505 - 8670.917: 33.3364% ( 151) 00:09:17.190 8670.917 - 8721.329: 34.7059% ( 149) 00:09:17.190 8721.329 - 8771.742: 36.0294% ( 144) 00:09:17.190 8771.742 - 8822.154: 37.4265% ( 152) 00:09:17.190 8822.154 - 8872.566: 38.7040% ( 139) 00:09:17.190 8872.566 - 8922.978: 39.9632% ( 137) 00:09:17.190 8922.978 - 8973.391: 41.2040% ( 135) 00:09:17.190 8973.391 - 9023.803: 42.3438% ( 124) 00:09:17.190 9023.803 - 9074.215: 43.4283% ( 118) 00:09:17.190 9074.215 - 9124.628: 44.3474% ( 100) 00:09:17.190 9124.628 - 9175.040: 45.1562% ( 88) 00:09:17.190 9175.040 - 9225.452: 45.9467% ( 86) 00:09:17.190 9225.452 - 9275.865: 46.5901% ( 70) 00:09:17.190 9275.865 - 9326.277: 47.1783% ( 64) 00:09:17.190 9326.277 - 9376.689: 47.6838% ( 55) 00:09:17.190 9376.689 - 9427.102: 48.1434% ( 50) 00:09:17.190 9427.102 - 9477.514: 48.4835% ( 37) 00:09:17.190 9477.514 - 9527.926: 48.7960% ( 34) 00:09:17.190 9527.926 - 9578.338: 49.0625% ( 29) 00:09:17.190 9578.338 - 9628.751: 49.2739% ( 23) 00:09:17.190 9628.751 - 9679.163: 49.5404% ( 29) 00:09:17.190 9679.163 - 9729.575: 49.7794% ( 26) 00:09:17.190 9729.575 - 9779.988: 50.0276% ( 27) 00:09:17.190 9779.988 - 9830.400: 50.2849% ( 28) 00:09:17.190 9830.400 - 9880.812: 50.5331% ( 27) 00:09:17.190 9880.812 - 9931.225: 50.7812% ( 27) 00:09:17.190 9931.225 - 9981.637: 51.0202% ( 26) 00:09:17.190 9981.637 - 10032.049: 51.2224% ( 22) 00:09:17.190 10032.049 - 10082.462: 51.4338% ( 23) 00:09:17.190 10082.462 - 10132.874: 51.6452% ( 23) 00:09:17.190 10132.874 - 10183.286: 51.8842% ( 26) 00:09:17.190 10183.286 - 10233.698: 52.1415% ( 28) 00:09:17.190 10233.698 - 10284.111: 52.3897% ( 27) 00:09:17.190 10284.111 - 10334.523: 52.6287% ( 26) 00:09:17.190 10334.523 - 10384.935: 52.8768% ( 27) 00:09:17.190 10384.935 - 10435.348: 53.1158% ( 26) 00:09:17.190 10435.348 - 10485.760: 53.3732% ( 28) 00:09:17.190 10485.760 - 10536.172: 53.6029% ( 25) 00:09:17.190 10536.172 - 10586.585: 53.8511% ( 27) 00:09:17.190 10586.585 - 10636.997: 54.1452% ( 32) 00:09:17.190 10636.997 - 10687.409: 54.4118% ( 29) 00:09:17.190 10687.409 - 
10737.822: 54.7059% ( 32) 00:09:17.190 10737.822 - 10788.234: 54.9540% ( 27) 00:09:17.190 10788.234 - 10838.646: 55.2114% ( 28) 00:09:17.190 10838.646 - 10889.058: 55.3952% ( 20) 00:09:17.190 10889.058 - 10939.471: 55.5515% ( 17) 00:09:17.190 10939.471 - 10989.883: 55.6710% ( 13) 00:09:17.190 10989.883 - 11040.295: 55.7537% ( 9) 00:09:17.190 11040.295 - 11090.708: 55.8364% ( 9) 00:09:17.190 11090.708 - 11141.120: 55.9283% ( 10) 00:09:17.190 11141.120 - 11191.532: 56.0110% ( 9) 00:09:17.190 11191.532 - 11241.945: 56.0846% ( 8) 00:09:17.190 11241.945 - 11292.357: 56.1765% ( 10) 00:09:17.190 11292.357 - 11342.769: 56.2684% ( 10) 00:09:17.190 11342.769 - 11393.182: 56.3419% ( 8) 00:09:17.190 11393.182 - 11443.594: 56.4430% ( 11) 00:09:17.190 11443.594 - 11494.006: 56.5349% ( 10) 00:09:17.190 11494.006 - 11544.418: 56.6176% ( 9) 00:09:17.190 11544.418 - 11594.831: 56.6912% ( 8) 00:09:17.190 11594.831 - 11645.243: 56.7555% ( 7) 00:09:17.190 11645.243 - 11695.655: 56.8015% ( 5) 00:09:17.190 11695.655 - 11746.068: 56.8566% ( 6) 00:09:17.190 11746.068 - 11796.480: 56.9210% ( 7) 00:09:17.190 11796.480 - 11846.892: 56.9853% ( 7) 00:09:17.190 11846.892 - 11897.305: 57.0404% ( 6) 00:09:17.190 11897.305 - 11947.717: 57.0956% ( 6) 00:09:17.190 11947.717 - 11998.129: 57.1599% ( 7) 00:09:17.190 11998.129 - 12048.542: 57.2243% ( 7) 00:09:17.190 12048.542 - 12098.954: 57.2794% ( 6) 00:09:17.190 12098.954 - 12149.366: 57.3713% ( 10) 00:09:17.190 12149.366 - 12199.778: 57.4540% ( 9) 00:09:17.190 12199.778 - 12250.191: 57.5276% ( 8) 00:09:17.190 12250.191 - 12300.603: 57.6011% ( 8) 00:09:17.190 12300.603 - 12351.015: 57.7482% ( 16) 00:09:17.190 12351.015 - 12401.428: 57.8493% ( 11) 00:09:17.190 12401.428 - 12451.840: 57.9320% ( 9) 00:09:17.190 12451.840 - 12502.252: 58.0147% ( 9) 00:09:17.190 12502.252 - 12552.665: 58.1066% ( 10) 00:09:17.190 12552.665 - 12603.077: 58.1985% ( 10) 00:09:17.190 12603.077 - 12653.489: 58.2629% ( 7) 00:09:17.190 12653.489 - 12703.902: 58.3640% ( 11) 00:09:17.190 12703.902 - 12754.314: 58.4651% ( 11) 00:09:17.190 12754.314 - 12804.726: 58.5662% ( 11) 00:09:17.190 12804.726 - 12855.138: 58.6581% ( 10) 00:09:17.190 12855.138 - 12905.551: 58.7776% ( 13) 00:09:17.190 12905.551 - 13006.375: 59.0165% ( 26) 00:09:17.190 13006.375 - 13107.200: 59.3474% ( 36) 00:09:17.190 13107.200 - 13208.025: 59.6783% ( 36) 00:09:17.190 13208.025 - 13308.849: 60.0735% ( 43) 00:09:17.190 13308.849 - 13409.674: 60.5699% ( 54) 00:09:17.190 13409.674 - 13510.498: 61.2132% ( 70) 00:09:17.190 13510.498 - 13611.323: 61.8199% ( 66) 00:09:17.190 13611.323 - 13712.148: 62.3989% ( 63) 00:09:17.190 13712.148 - 13812.972: 62.9779% ( 63) 00:09:17.190 13812.972 - 13913.797: 63.6305% ( 71) 00:09:17.190 13913.797 - 14014.622: 64.3290% ( 76) 00:09:17.190 14014.622 - 14115.446: 65.0184% ( 75) 00:09:17.190 14115.446 - 14216.271: 65.7996% ( 85) 00:09:17.190 14216.271 - 14317.095: 66.6544% ( 93) 00:09:17.190 14317.095 - 14417.920: 67.5735% ( 100) 00:09:17.190 14417.920 - 14518.745: 68.4191% ( 92) 00:09:17.190 14518.745 - 14619.569: 69.4393% ( 111) 00:09:17.190 14619.569 - 14720.394: 70.4779% ( 113) 00:09:17.190 14720.394 - 14821.218: 71.5625% ( 118) 00:09:17.190 14821.218 - 14922.043: 72.6654% ( 120) 00:09:17.190 14922.043 - 15022.868: 73.8051% ( 124) 00:09:17.190 15022.868 - 15123.692: 74.9449% ( 124) 00:09:17.190 15123.692 - 15224.517: 76.0938% ( 125) 00:09:17.190 15224.517 - 15325.342: 77.2335% ( 124) 00:09:17.190 15325.342 - 15426.166: 78.2904% ( 115) 00:09:17.190 15426.166 - 15526.991: 79.3842% ( 119) 00:09:17.190 
15526.991 - 15627.815: 80.5147% ( 123) 00:09:17.190 15627.815 - 15728.640: 81.5074% ( 108) 00:09:17.190 15728.640 - 15829.465: 82.4632% ( 104) 00:09:17.190 15829.465 - 15930.289: 83.4007% ( 102) 00:09:17.190 15930.289 - 16031.114: 84.3750% ( 106) 00:09:17.190 16031.114 - 16131.938: 85.3125% ( 102) 00:09:17.190 16131.938 - 16232.763: 86.1765% ( 94) 00:09:17.190 16232.763 - 16333.588: 86.8474% ( 73) 00:09:17.190 16333.588 - 16434.412: 87.5643% ( 78) 00:09:17.190 16434.412 - 16535.237: 88.1893% ( 68) 00:09:17.190 16535.237 - 16636.062: 88.7592% ( 62) 00:09:17.190 16636.062 - 16736.886: 89.2923% ( 58) 00:09:17.190 16736.886 - 16837.711: 89.8621% ( 62) 00:09:17.190 16837.711 - 16938.535: 90.3033% ( 48) 00:09:17.190 16938.535 - 17039.360: 90.7261% ( 46) 00:09:17.190 17039.360 - 17140.185: 91.1305% ( 44) 00:09:17.190 17140.185 - 17241.009: 91.4522% ( 35) 00:09:17.190 17241.009 - 17341.834: 91.6544% ( 22) 00:09:17.190 17341.834 - 17442.658: 91.8658% ( 23) 00:09:17.190 17442.658 - 17543.483: 92.1048% ( 26) 00:09:17.190 17543.483 - 17644.308: 92.3805% ( 30) 00:09:17.190 17644.308 - 17745.132: 92.6379% ( 28) 00:09:17.190 17745.132 - 17845.957: 92.9504% ( 34) 00:09:17.190 17845.957 - 17946.782: 93.2169% ( 29) 00:09:17.190 17946.782 - 18047.606: 93.4926% ( 30) 00:09:17.190 18047.606 - 18148.431: 93.7500% ( 28) 00:09:17.190 18148.431 - 18249.255: 94.0074% ( 28) 00:09:17.190 18249.255 - 18350.080: 94.2555% ( 27) 00:09:17.190 18350.080 - 18450.905: 94.5496% ( 32) 00:09:17.190 18450.905 - 18551.729: 94.8346% ( 31) 00:09:17.190 18551.729 - 18652.554: 95.1562% ( 35) 00:09:17.190 18652.554 - 18753.378: 95.4412% ( 31) 00:09:17.190 18753.378 - 18854.203: 95.7169% ( 30) 00:09:17.190 18854.203 - 18955.028: 96.0018% ( 31) 00:09:17.190 18955.028 - 19055.852: 96.2408% ( 26) 00:09:17.190 19055.852 - 19156.677: 96.4982% ( 28) 00:09:17.190 19156.677 - 19257.502: 96.7188% ( 24) 00:09:17.190 19257.502 - 19358.326: 96.9301% ( 23) 00:09:17.190 19358.326 - 19459.151: 97.1140% ( 20) 00:09:17.190 19459.151 - 19559.975: 97.2978% ( 20) 00:09:17.190 19559.975 - 19660.800: 97.4908% ( 21) 00:09:17.190 19660.800 - 19761.625: 97.6562% ( 18) 00:09:17.190 19761.625 - 19862.449: 97.8309% ( 19) 00:09:17.190 19862.449 - 19963.274: 98.0055% ( 19) 00:09:17.190 19963.274 - 20064.098: 98.1342% ( 14) 00:09:17.190 20064.098 - 20164.923: 98.2537% ( 13) 00:09:17.191 20164.923 - 20265.748: 98.3272% ( 8) 00:09:17.191 20265.748 - 20366.572: 98.4099% ( 9) 00:09:17.191 20366.572 - 20467.397: 98.4835% ( 8) 00:09:17.191 20467.397 - 20568.222: 98.5662% ( 9) 00:09:17.191 20568.222 - 20669.046: 98.6121% ( 5) 00:09:17.191 20669.046 - 20769.871: 98.6673% ( 6) 00:09:17.191 20769.871 - 20870.695: 98.7040% ( 4) 00:09:17.191 20870.695 - 20971.520: 98.7500% ( 5) 00:09:17.191 20971.520 - 21072.345: 98.8051% ( 6) 00:09:17.191 21072.345 - 21173.169: 98.8235% ( 2) 00:09:17.191 30045.735 - 30247.385: 98.8419% ( 2) 00:09:17.191 30247.385 - 30449.034: 98.9338% ( 10) 00:09:17.191 30449.034 - 30650.683: 99.0257% ( 10) 00:09:17.191 30650.683 - 30852.332: 99.1268% ( 11) 00:09:17.191 30852.332 - 31053.982: 99.2279% ( 11) 00:09:17.191 31053.982 - 31255.631: 99.3199% ( 10) 00:09:17.191 31255.631 - 31457.280: 99.4210% ( 11) 00:09:17.191 31457.280 - 31658.929: 99.5129% ( 10) 00:09:17.191 31658.929 - 31860.578: 99.6232% ( 12) 00:09:17.191 31860.578 - 32062.228: 99.7243% ( 11) 00:09:17.191 32062.228 - 32263.877: 99.8346% ( 12) 00:09:17.191 32263.877 - 32465.526: 99.9449% ( 12) 00:09:17.191 32465.526 - 32667.175: 100.0000% ( 6) 00:09:17.191 00:09:17.191 Latency histogram for PCIE 
(0000:00:08.0) NSID 1 from core 0: 00:09:17.191 ============================================================================== 00:09:17.191 Range in us Cumulative IO count 00:09:17.191 5343.705 - 5368.911: 0.0184% ( 2) 00:09:17.191 5368.911 - 5394.117: 0.0368% ( 2) 00:09:17.191 5394.117 - 5419.323: 0.0551% ( 2) 00:09:17.191 5419.323 - 5444.529: 0.0827% ( 3) 00:09:17.191 5444.529 - 5469.735: 0.1011% ( 2) 00:09:17.191 5469.735 - 5494.942: 0.1195% ( 2) 00:09:17.191 5494.942 - 5520.148: 0.1471% ( 3) 00:09:17.191 5520.148 - 5545.354: 0.1654% ( 2) 00:09:17.191 5545.354 - 5570.560: 0.1930% ( 3) 00:09:17.191 5570.560 - 5595.766: 0.2114% ( 2) 00:09:17.191 5595.766 - 5620.972: 0.2298% ( 2) 00:09:17.191 5620.972 - 5646.178: 0.2574% ( 3) 00:09:17.191 5646.178 - 5671.385: 0.2757% ( 2) 00:09:17.191 5671.385 - 5696.591: 0.3033% ( 3) 00:09:17.191 5696.591 - 5721.797: 0.3217% ( 2) 00:09:17.191 5721.797 - 5747.003: 0.3493% ( 3) 00:09:17.191 5747.003 - 5772.209: 0.3676% ( 2) 00:09:17.191 5772.209 - 5797.415: 0.3860% ( 2) 00:09:17.191 5797.415 - 5822.622: 0.4136% ( 3) 00:09:17.191 5822.622 - 5847.828: 0.4320% ( 2) 00:09:17.191 5847.828 - 5873.034: 0.4504% ( 2) 00:09:17.191 5873.034 - 5898.240: 0.4688% ( 2) 00:09:17.191 5898.240 - 5923.446: 0.4963% ( 3) 00:09:17.191 5923.446 - 5948.652: 0.5147% ( 2) 00:09:17.191 5948.652 - 5973.858: 0.5331% ( 2) 00:09:17.191 5973.858 - 5999.065: 0.5607% ( 3) 00:09:17.191 5999.065 - 6024.271: 0.5790% ( 2) 00:09:17.191 6024.271 - 6049.477: 0.6066% ( 3) 00:09:17.191 6049.477 - 6074.683: 0.6250% ( 2) 00:09:17.191 6074.683 - 6099.889: 0.6434% ( 2) 00:09:17.191 6099.889 - 6125.095: 0.6710% ( 3) 00:09:17.191 6125.095 - 6150.302: 0.6893% ( 2) 00:09:17.191 6150.302 - 6175.508: 0.7169% ( 3) 00:09:17.191 6175.508 - 6200.714: 0.7353% ( 2) 00:09:17.191 6200.714 - 6225.920: 0.7537% ( 2) 00:09:17.191 6225.920 - 6251.126: 0.7812% ( 3) 00:09:17.191 6251.126 - 6276.332: 0.7904% ( 1) 00:09:17.191 6276.332 - 6301.538: 0.8180% ( 3) 00:09:17.191 6301.538 - 6326.745: 0.8364% ( 2) 00:09:17.191 6326.745 - 6351.951: 0.8548% ( 2) 00:09:17.191 6351.951 - 6377.157: 0.8824% ( 3) 00:09:17.191 6377.157 - 6402.363: 0.9007% ( 2) 00:09:17.191 6402.363 - 6427.569: 0.9191% ( 2) 00:09:17.191 6427.569 - 6452.775: 0.9375% ( 2) 00:09:17.191 6452.775 - 6503.188: 0.9835% ( 5) 00:09:17.191 6503.188 - 6553.600: 1.0294% ( 5) 00:09:17.191 6553.600 - 6604.012: 1.0754% ( 5) 00:09:17.191 6604.012 - 6654.425: 1.1213% ( 5) 00:09:17.191 6654.425 - 6704.837: 1.1581% ( 4) 00:09:17.191 6704.837 - 6755.249: 1.1765% ( 2) 00:09:17.191 6956.898 - 7007.311: 1.2592% ( 9) 00:09:17.191 7007.311 - 7057.723: 1.3419% ( 9) 00:09:17.191 7057.723 - 7108.135: 1.4154% ( 8) 00:09:17.191 7108.135 - 7158.548: 1.4982% ( 9) 00:09:17.191 7158.548 - 7208.960: 1.5901% ( 10) 00:09:17.191 7208.960 - 7259.372: 1.7371% ( 16) 00:09:17.191 7259.372 - 7309.785: 2.1048% ( 40) 00:09:17.191 7309.785 - 7360.197: 2.4632% ( 39) 00:09:17.191 7360.197 - 7410.609: 2.9044% ( 48) 00:09:17.191 7410.609 - 7461.022: 3.5662% ( 72) 00:09:17.191 7461.022 - 7511.434: 4.4301% ( 94) 00:09:17.191 7511.434 - 7561.846: 5.3768% ( 103) 00:09:17.191 7561.846 - 7612.258: 6.4982% ( 122) 00:09:17.191 7612.258 - 7662.671: 7.5643% ( 116) 00:09:17.191 7662.671 - 7713.083: 8.7592% ( 130) 00:09:17.191 7713.083 - 7763.495: 9.8897% ( 123) 00:09:17.191 7763.495 - 7813.908: 11.1121% ( 133) 00:09:17.191 7813.908 - 7864.320: 12.4265% ( 143) 00:09:17.191 7864.320 - 7914.732: 13.7684% ( 146) 00:09:17.191 7914.732 - 7965.145: 15.1654% ( 152) 00:09:17.191 7965.145 - 8015.557: 16.4890% ( 144) 00:09:17.191 
8015.557 - 8065.969: 17.8493% ( 148) 00:09:17.191 8065.969 - 8116.382: 19.1636% ( 143) 00:09:17.191 8116.382 - 8166.794: 20.5423% ( 150) 00:09:17.191 8166.794 - 8217.206: 21.8290% ( 140) 00:09:17.191 8217.206 - 8267.618: 23.2353% ( 153) 00:09:17.191 8267.618 - 8318.031: 24.5680% ( 145) 00:09:17.191 8318.031 - 8368.443: 25.9375% ( 149) 00:09:17.191 8368.443 - 8418.855: 27.3621% ( 155) 00:09:17.191 8418.855 - 8469.268: 28.7224% ( 148) 00:09:17.191 8469.268 - 8519.680: 30.1011% ( 150) 00:09:17.191 8519.680 - 8570.092: 31.4614% ( 148) 00:09:17.191 8570.092 - 8620.505: 32.8125% ( 147) 00:09:17.191 8620.505 - 8670.917: 34.1452% ( 145) 00:09:17.191 8670.917 - 8721.329: 35.4044% ( 137) 00:09:17.191 8721.329 - 8771.742: 36.7188% ( 143) 00:09:17.191 8771.742 - 8822.154: 38.0055% ( 140) 00:09:17.191 8822.154 - 8872.566: 39.3015% ( 141) 00:09:17.191 8872.566 - 8922.978: 40.6618% ( 148) 00:09:17.191 8922.978 - 8973.391: 41.9026% ( 135) 00:09:17.191 8973.391 - 9023.803: 43.0882% ( 129) 00:09:17.191 9023.803 - 9074.215: 44.1268% ( 113) 00:09:17.191 9074.215 - 9124.628: 45.0368% ( 99) 00:09:17.191 9124.628 - 9175.040: 45.8456% ( 88) 00:09:17.191 9175.040 - 9225.452: 46.5074% ( 72) 00:09:17.191 9225.452 - 9275.865: 47.1324% ( 68) 00:09:17.191 9275.865 - 9326.277: 47.7114% ( 63) 00:09:17.191 9326.277 - 9376.689: 48.1158% ( 44) 00:09:17.191 9376.689 - 9427.102: 48.4743% ( 39) 00:09:17.191 9427.102 - 9477.514: 48.8327% ( 39) 00:09:17.191 9477.514 - 9527.926: 49.1085% ( 30) 00:09:17.191 9527.926 - 9578.338: 49.3842% ( 30) 00:09:17.191 9578.338 - 9628.751: 49.6415% ( 28) 00:09:17.191 9628.751 - 9679.163: 49.8713% ( 25) 00:09:17.191 9679.163 - 9729.575: 50.1011% ( 25) 00:09:17.191 9729.575 - 9779.988: 50.3217% ( 24) 00:09:17.191 9779.988 - 9830.400: 50.5423% ( 24) 00:09:17.191 9830.400 - 9880.812: 50.7169% ( 19) 00:09:17.191 9880.812 - 9931.225: 50.9007% ( 20) 00:09:17.191 9931.225 - 9981.637: 51.0938% ( 21) 00:09:17.191 9981.637 - 10032.049: 51.2868% ( 21) 00:09:17.191 10032.049 - 10082.462: 51.4522% ( 18) 00:09:17.191 10082.462 - 10132.874: 51.6544% ( 22) 00:09:17.191 10132.874 - 10183.286: 51.8382% ( 20) 00:09:17.191 10183.286 - 10233.698: 52.0404% ( 22) 00:09:17.191 10233.698 - 10284.111: 52.2426% ( 22) 00:09:17.191 10284.111 - 10334.523: 52.4632% ( 24) 00:09:17.191 10334.523 - 10384.935: 52.6562% ( 21) 00:09:17.191 10384.935 - 10435.348: 52.8768% ( 24) 00:09:17.191 10435.348 - 10485.760: 53.1158% ( 26) 00:09:17.191 10485.760 - 10536.172: 53.3272% ( 23) 00:09:17.191 10536.172 - 10586.585: 53.5846% ( 28) 00:09:17.191 10586.585 - 10636.997: 53.8051% ( 24) 00:09:17.191 10636.997 - 10687.409: 54.0349% ( 25) 00:09:17.191 10687.409 - 10737.822: 54.2279% ( 21) 00:09:17.191 10737.822 - 10788.234: 54.4393% ( 23) 00:09:17.191 10788.234 - 10838.646: 54.6324% ( 21) 00:09:17.191 10838.646 - 10889.058: 54.8070% ( 19) 00:09:17.191 10889.058 - 10939.471: 54.9724% ( 18) 00:09:17.191 10939.471 - 10989.883: 55.1746% ( 22) 00:09:17.191 10989.883 - 11040.295: 55.3401% ( 18) 00:09:17.191 11040.295 - 11090.708: 55.4779% ( 15) 00:09:17.191 11090.708 - 11141.120: 55.6158% ( 15) 00:09:17.191 11141.120 - 11191.532: 55.7629% ( 16) 00:09:17.191 11191.532 - 11241.945: 55.8824% ( 13) 00:09:17.191 11241.945 - 11292.357: 55.9651% ( 9) 00:09:17.191 11292.357 - 11342.769: 56.0478% ( 9) 00:09:17.191 11342.769 - 11393.182: 56.1489% ( 11) 00:09:17.191 11393.182 - 11443.594: 56.2500% ( 11) 00:09:17.191 11443.594 - 11494.006: 56.3327% ( 9) 00:09:17.191 11494.006 - 11544.418: 56.4154% ( 9) 00:09:17.191 11544.418 - 11594.831: 56.5165% ( 11) 
00:09:17.191 11594.831 - 11645.243: 56.6636% ( 16) 00:09:17.191 11645.243 - 11695.655: 56.8566% ( 21) 00:09:17.191 11695.655 - 11746.068: 56.9669% ( 12) 00:09:17.191 11746.068 - 11796.480: 57.0772% ( 12) 00:09:17.191 11796.480 - 11846.892: 57.1599% ( 9) 00:09:17.191 11846.892 - 11897.305: 57.2794% ( 13) 00:09:17.191 11897.305 - 11947.717: 57.3621% ( 9) 00:09:17.191 11947.717 - 11998.129: 57.4540% ( 10) 00:09:17.191 11998.129 - 12048.542: 57.5368% ( 9) 00:09:17.191 12048.542 - 12098.954: 57.6195% ( 9) 00:09:17.191 12098.954 - 12149.366: 57.7206% ( 11) 00:09:17.191 12149.366 - 12199.778: 57.8033% ( 9) 00:09:17.191 12199.778 - 12250.191: 57.8860% ( 9) 00:09:17.191 12250.191 - 12300.603: 57.9596% ( 8) 00:09:17.191 12300.603 - 12351.015: 58.0239% ( 7) 00:09:17.191 12351.015 - 12401.428: 58.0790% ( 6) 00:09:17.191 12401.428 - 12451.840: 58.1250% ( 5) 00:09:17.191 12451.840 - 12502.252: 58.1710% ( 5) 00:09:17.191 12502.252 - 12552.665: 58.2169% ( 5) 00:09:17.191 12552.665 - 12603.077: 58.2445% ( 3) 00:09:17.191 12603.077 - 12653.489: 58.2721% ( 3) 00:09:17.191 12653.489 - 12703.902: 58.3180% ( 5) 00:09:17.191 12703.902 - 12754.314: 58.3732% ( 6) 00:09:17.191 12754.314 - 12804.726: 58.4559% ( 9) 00:09:17.191 12804.726 - 12855.138: 58.5386% ( 9) 00:09:17.191 12855.138 - 12905.551: 58.6029% ( 7) 00:09:17.191 12905.551 - 13006.375: 58.7592% ( 17) 00:09:17.191 13006.375 - 13107.200: 59.0349% ( 30) 00:09:17.191 13107.200 - 13208.025: 59.3750% ( 37) 00:09:17.191 13208.025 - 13308.849: 59.7886% ( 45) 00:09:17.191 13308.849 - 13409.674: 60.3585% ( 62) 00:09:17.191 13409.674 - 13510.498: 60.8915% ( 58) 00:09:17.191 13510.498 - 13611.323: 61.5074% ( 67) 00:09:17.191 13611.323 - 13712.148: 62.1507% ( 70) 00:09:17.191 13712.148 - 13812.972: 62.8033% ( 71) 00:09:17.191 13812.972 - 13913.797: 63.4926% ( 75) 00:09:17.191 13913.797 - 14014.622: 64.2463% ( 82) 00:09:17.191 14014.622 - 14115.446: 65.1471% ( 98) 00:09:17.191 14115.446 - 14216.271: 66.0110% ( 94) 00:09:17.191 14216.271 - 14317.095: 66.9393% ( 101) 00:09:17.191 14317.095 - 14417.920: 67.9136% ( 106) 00:09:17.191 14417.920 - 14518.745: 68.9246% ( 110) 00:09:17.191 14518.745 - 14619.569: 69.9632% ( 113) 00:09:17.191 14619.569 - 14720.394: 71.0478% ( 118) 00:09:17.191 14720.394 - 14821.218: 72.1507% ( 120) 00:09:17.191 14821.218 - 14922.043: 73.3088% ( 126) 00:09:17.191 14922.043 - 15022.868: 74.3842% ( 117) 00:09:17.191 15022.868 - 15123.692: 75.4228% ( 113) 00:09:17.191 15123.692 - 15224.517: 76.4338% ( 110) 00:09:17.191 15224.517 - 15325.342: 77.4816% ( 114) 00:09:17.191 15325.342 - 15426.166: 78.4743% ( 108) 00:09:17.191 15426.166 - 15526.991: 79.3566% ( 96) 00:09:17.191 15526.991 - 15627.815: 80.2390% ( 96) 00:09:17.191 15627.815 - 15728.640: 81.1765% ( 102) 00:09:17.191 15728.640 - 15829.465: 82.1140% ( 102) 00:09:17.191 15829.465 - 15930.289: 82.9688% ( 93) 00:09:17.191 15930.289 - 16031.114: 83.7684% ( 87) 00:09:17.191 16031.114 - 16131.938: 84.5221% ( 82) 00:09:17.191 16131.938 - 16232.763: 85.1930% ( 73) 00:09:17.191 16232.763 - 16333.588: 85.9283% ( 80) 00:09:17.191 16333.588 - 16434.412: 86.6452% ( 78) 00:09:17.191 16434.412 - 16535.237: 87.3529% ( 77) 00:09:17.191 16535.237 - 16636.062: 87.9963% ( 70) 00:09:17.191 16636.062 - 16736.886: 88.5662% ( 62) 00:09:17.191 16736.886 - 16837.711: 89.0441% ( 52) 00:09:17.191 16837.711 - 16938.535: 89.4669% ( 46) 00:09:17.191 16938.535 - 17039.360: 89.8621% ( 43) 00:09:17.191 17039.360 - 17140.185: 90.2574% ( 43) 00:09:17.191 17140.185 - 17241.009: 90.6526% ( 43) 00:09:17.191 17241.009 - 17341.834: 
91.0294% ( 41) 00:09:17.191 17341.834 - 17442.658: 91.4522% ( 46) 00:09:17.191 17442.658 - 17543.483: 91.9761% ( 57) 00:09:17.192 17543.483 - 17644.308: 92.4632% ( 53) 00:09:17.192 17644.308 - 17745.132: 92.9320% ( 51) 00:09:17.192 17745.132 - 17845.957: 93.3548% ( 46) 00:09:17.192 17845.957 - 17946.782: 93.6949% ( 37) 00:09:17.192 17946.782 - 18047.606: 94.1085% ( 45) 00:09:17.192 18047.606 - 18148.431: 94.4761% ( 40) 00:09:17.192 18148.431 - 18249.255: 94.8070% ( 36) 00:09:17.192 18249.255 - 18350.080: 95.1011% ( 32) 00:09:17.192 18350.080 - 18450.905: 95.3493% ( 27) 00:09:17.192 18450.905 - 18551.729: 95.5607% ( 23) 00:09:17.192 18551.729 - 18652.554: 95.8272% ( 29) 00:09:17.192 18652.554 - 18753.378: 96.0846% ( 28) 00:09:17.192 18753.378 - 18854.203: 96.3511% ( 29) 00:09:17.192 18854.203 - 18955.028: 96.5717% ( 24) 00:09:17.192 18955.028 - 19055.852: 96.8199% ( 27) 00:09:17.192 19055.852 - 19156.677: 97.0588% ( 26) 00:09:17.192 19156.677 - 19257.502: 97.2794% ( 24) 00:09:17.192 19257.502 - 19358.326: 97.5000% ( 24) 00:09:17.192 19358.326 - 19459.151: 97.7022% ( 22) 00:09:17.192 19459.151 - 19559.975: 97.8676% ( 18) 00:09:17.192 19559.975 - 19660.800: 97.9504% ( 9) 00:09:17.192 19660.800 - 19761.625: 98.0239% ( 8) 00:09:17.192 19761.625 - 19862.449: 98.1066% ( 9) 00:09:17.192 19862.449 - 19963.274: 98.1985% ( 10) 00:09:17.192 19963.274 - 20064.098: 98.2812% ( 9) 00:09:17.192 20064.098 - 20164.923: 98.3732% ( 10) 00:09:17.192 20164.923 - 20265.748: 98.4651% ( 10) 00:09:17.192 20265.748 - 20366.572: 98.5386% ( 8) 00:09:17.192 20366.572 - 20467.397: 98.5938% ( 6) 00:09:17.192 20467.397 - 20568.222: 98.6489% ( 6) 00:09:17.192 20568.222 - 20669.046: 98.6949% ( 5) 00:09:17.192 20669.046 - 20769.871: 98.7592% ( 7) 00:09:17.192 20769.871 - 20870.695: 98.7960% ( 4) 00:09:17.192 20870.695 - 20971.520: 98.8143% ( 2) 00:09:17.192 20971.520 - 21072.345: 98.8235% ( 1) 00:09:17.192 32465.526 - 32667.175: 98.9062% ( 9) 00:09:17.192 32667.175 - 32868.825: 99.0165% ( 12) 00:09:17.192 32868.825 - 33070.474: 99.1176% ( 11) 00:09:17.192 33070.474 - 33272.123: 99.2188% ( 11) 00:09:17.192 33272.123 - 33473.772: 99.3290% ( 12) 00:09:17.192 33473.772 - 33675.422: 99.4301% ( 11) 00:09:17.192 33675.422 - 33877.071: 99.5221% ( 10) 00:09:17.192 33877.071 - 34078.720: 99.6324% ( 12) 00:09:17.192 34078.720 - 34280.369: 99.7243% ( 10) 00:09:17.192 34280.369 - 34482.018: 99.8254% ( 11) 00:09:17.192 34482.018 - 34683.668: 99.9357% ( 12) 00:09:17.192 34683.668 - 34885.317: 100.0000% ( 7) 00:09:17.192 00:09:17.192 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:17.192 ============================================================================== 00:09:17.192 Range in us Cumulative IO count 00:09:17.192 4537.108 - 4562.314: 0.0551% ( 6) 00:09:17.192 4562.314 - 4587.520: 0.0735% ( 2) 00:09:17.192 4587.520 - 4612.726: 0.0827% ( 1) 00:09:17.192 4612.726 - 4637.932: 0.1011% ( 2) 00:09:17.192 4637.932 - 4663.138: 0.1195% ( 2) 00:09:17.192 4663.138 - 4688.345: 0.1471% ( 3) 00:09:17.192 4688.345 - 4713.551: 0.1654% ( 2) 00:09:17.192 4713.551 - 4738.757: 0.1838% ( 2) 00:09:17.192 4738.757 - 4763.963: 0.2022% ( 2) 00:09:17.192 4763.963 - 4789.169: 0.2206% ( 2) 00:09:17.192 4789.169 - 4814.375: 0.2482% ( 3) 00:09:17.192 4814.375 - 4839.582: 0.2665% ( 2) 00:09:17.192 4839.582 - 4864.788: 0.2849% ( 2) 00:09:17.192 4864.788 - 4889.994: 0.3033% ( 2) 00:09:17.192 4889.994 - 4915.200: 0.3309% ( 3) 00:09:17.192 4915.200 - 4940.406: 0.3493% ( 2) 00:09:17.192 4940.406 - 4965.612: 0.3676% ( 2) 00:09:17.192 4965.612 - 
4990.818: 0.3860% ( 2) 00:09:17.192 4990.818 - 5016.025: 0.4136% ( 3) 00:09:17.192 5016.025 - 5041.231: 0.4320% ( 2) 00:09:17.192 5041.231 - 5066.437: 0.4504% ( 2) 00:09:17.192 5066.437 - 5091.643: 0.4688% ( 2) 00:09:17.192 5091.643 - 5116.849: 0.4963% ( 3) 00:09:17.192 5116.849 - 5142.055: 0.5147% ( 2) 00:09:17.192 5142.055 - 5167.262: 0.5331% ( 2) 00:09:17.192 5167.262 - 5192.468: 0.5515% ( 2) 00:09:17.192 5192.468 - 5217.674: 0.5790% ( 3) 00:09:17.192 5217.674 - 5242.880: 0.5974% ( 2) 00:09:17.192 5242.880 - 5268.086: 0.6158% ( 2) 00:09:17.192 5268.086 - 5293.292: 0.6342% ( 2) 00:09:17.192 5293.292 - 5318.498: 0.6526% ( 2) 00:09:17.192 5318.498 - 5343.705: 0.6801% ( 3) 00:09:17.192 5343.705 - 5368.911: 0.6985% ( 2) 00:09:17.192 5368.911 - 5394.117: 0.7169% ( 2) 00:09:17.192 5394.117 - 5419.323: 0.7353% ( 2) 00:09:17.192 5419.323 - 5444.529: 0.7629% ( 3) 00:09:17.192 5444.529 - 5469.735: 0.7812% ( 2) 00:09:17.192 5469.735 - 5494.942: 0.7996% ( 2) 00:09:17.192 5494.942 - 5520.148: 0.8180% ( 2) 00:09:17.192 5520.148 - 5545.354: 0.8456% ( 3) 00:09:17.192 5545.354 - 5570.560: 0.8640% ( 2) 00:09:17.192 5570.560 - 5595.766: 0.8824% ( 2) 00:09:17.192 5595.766 - 5620.972: 0.8915% ( 1) 00:09:17.192 5620.972 - 5646.178: 0.9191% ( 3) 00:09:17.192 5646.178 - 5671.385: 0.9375% ( 2) 00:09:17.192 5671.385 - 5696.591: 0.9559% ( 2) 00:09:17.192 5696.591 - 5721.797: 0.9743% ( 2) 00:09:17.192 5721.797 - 5747.003: 1.0018% ( 3) 00:09:17.192 5747.003 - 5772.209: 1.0202% ( 2) 00:09:17.192 5772.209 - 5797.415: 1.0386% ( 2) 00:09:17.192 5797.415 - 5822.622: 1.0570% ( 2) 00:09:17.192 5822.622 - 5847.828: 1.0846% ( 3) 00:09:17.192 5847.828 - 5873.034: 1.1029% ( 2) 00:09:17.192 5873.034 - 5898.240: 1.1213% ( 2) 00:09:17.192 5898.240 - 5923.446: 1.1397% ( 2) 00:09:17.192 5923.446 - 5948.652: 1.1581% ( 2) 00:09:17.192 5948.652 - 5973.858: 1.1765% ( 2) 00:09:17.192 7007.311 - 7057.723: 1.1949% ( 2) 00:09:17.192 7057.723 - 7108.135: 1.2684% ( 8) 00:09:17.192 7108.135 - 7158.548: 1.3971% ( 14) 00:09:17.192 7158.548 - 7208.960: 1.5257% ( 14) 00:09:17.192 7208.960 - 7259.372: 1.7555% ( 25) 00:09:17.192 7259.372 - 7309.785: 2.0404% ( 31) 00:09:17.192 7309.785 - 7360.197: 2.3989% ( 39) 00:09:17.192 7360.197 - 7410.609: 2.8585% ( 50) 00:09:17.192 7410.609 - 7461.022: 3.4926% ( 69) 00:09:17.192 7461.022 - 7511.434: 4.2096% ( 78) 00:09:17.192 7511.434 - 7561.846: 5.1930% ( 107) 00:09:17.192 7561.846 - 7612.258: 6.1949% ( 109) 00:09:17.192 7612.258 - 7662.671: 7.2886% ( 119) 00:09:17.192 7662.671 - 7713.083: 8.4007% ( 121) 00:09:17.192 7713.083 - 7763.495: 9.7426% ( 146) 00:09:17.192 7763.495 - 7813.908: 11.0294% ( 140) 00:09:17.192 7813.908 - 7864.320: 12.3070% ( 139) 00:09:17.192 7864.320 - 7914.732: 13.7132% ( 153) 00:09:17.192 7914.732 - 7965.145: 15.0643% ( 147) 00:09:17.192 7965.145 - 8015.557: 16.4430% ( 150) 00:09:17.192 8015.557 - 8065.969: 17.7665% ( 144) 00:09:17.192 8065.969 - 8116.382: 19.1636% ( 152) 00:09:17.192 8116.382 - 8166.794: 20.5331% ( 149) 00:09:17.192 8166.794 - 8217.206: 21.9118% ( 150) 00:09:17.192 8217.206 - 8267.618: 23.2812% ( 149) 00:09:17.192 8267.618 - 8318.031: 24.6324% ( 147) 00:09:17.192 8318.031 - 8368.443: 26.0202% ( 151) 00:09:17.192 8368.443 - 8418.855: 27.3897% ( 149) 00:09:17.192 8418.855 - 8469.268: 28.7132% ( 144) 00:09:17.192 8469.268 - 8519.680: 30.0643% ( 147) 00:09:17.192 8519.680 - 8570.092: 31.4614% ( 152) 00:09:17.192 8570.092 - 8620.505: 32.8217% ( 148) 00:09:17.192 8620.505 - 8670.917: 34.2279% ( 153) 00:09:17.192 8670.917 - 8721.329: 35.6250% ( 152) 00:09:17.192 8721.329 - 
8771.742: 36.9301% ( 142) 00:09:17.192 8771.742 - 8822.154: 38.2353% ( 142) 00:09:17.192 8822.154 - 8872.566: 39.5037% ( 138) 00:09:17.192 8872.566 - 8922.978: 40.8088% ( 142) 00:09:17.192 8922.978 - 8973.391: 42.0221% ( 132) 00:09:17.192 8973.391 - 9023.803: 43.1710% ( 125) 00:09:17.192 9023.803 - 9074.215: 44.2647% ( 119) 00:09:17.192 9074.215 - 9124.628: 45.2665% ( 109) 00:09:17.192 9124.628 - 9175.040: 46.1765% ( 99) 00:09:17.192 9175.040 - 9225.452: 46.9577% ( 85) 00:09:17.192 9225.452 - 9275.865: 47.5551% ( 65) 00:09:17.192 9275.865 - 9326.277: 48.1066% ( 60) 00:09:17.192 9326.277 - 9376.689: 48.5662% ( 50) 00:09:17.192 9376.689 - 9427.102: 48.9522% ( 42) 00:09:17.192 9427.102 - 9477.514: 49.2555% ( 33) 00:09:17.192 9477.514 - 9527.926: 49.5221% ( 29) 00:09:17.192 9527.926 - 9578.338: 49.7243% ( 22) 00:09:17.192 9578.338 - 9628.751: 49.9449% ( 24) 00:09:17.192 9628.751 - 9679.163: 50.1195% ( 19) 00:09:17.192 9679.163 - 9729.575: 50.3125% ( 21) 00:09:17.192 9729.575 - 9779.988: 50.4871% ( 19) 00:09:17.192 9779.988 - 9830.400: 50.6710% ( 20) 00:09:17.192 9830.400 - 9880.812: 50.8180% ( 16) 00:09:17.192 9880.812 - 9931.225: 50.9926% ( 19) 00:09:17.192 9931.225 - 9981.637: 51.1397% ( 16) 00:09:17.192 9981.637 - 10032.049: 51.2960% ( 17) 00:09:17.192 10032.049 - 10082.462: 51.4522% ( 17) 00:09:17.192 10082.462 - 10132.874: 51.6176% ( 18) 00:09:17.192 10132.874 - 10183.286: 51.7739% ( 17) 00:09:17.192 10183.286 - 10233.698: 51.9577% ( 20) 00:09:17.192 10233.698 - 10284.111: 52.1415% ( 20) 00:09:17.192 10284.111 - 10334.523: 52.3254% ( 20) 00:09:17.192 10334.523 - 10384.935: 52.5184% ( 21) 00:09:17.192 10384.935 - 10435.348: 52.6930% ( 19) 00:09:17.192 10435.348 - 10485.760: 52.8860% ( 21) 00:09:17.192 10485.760 - 10536.172: 53.0699% ( 20) 00:09:17.192 10536.172 - 10586.585: 53.2996% ( 25) 00:09:17.192 10586.585 - 10636.997: 53.5110% ( 23) 00:09:17.192 10636.997 - 10687.409: 53.7040% ( 21) 00:09:17.192 10687.409 - 10737.822: 53.9246% ( 24) 00:09:17.192 10737.822 - 10788.234: 54.1360% ( 23) 00:09:17.192 10788.234 - 10838.646: 54.4118% ( 30) 00:09:17.192 10838.646 - 10889.058: 54.6140% ( 22) 00:09:17.192 10889.058 - 10939.471: 54.8346% ( 24) 00:09:17.192 10939.471 - 10989.883: 55.0368% ( 22) 00:09:17.192 10989.883 - 11040.295: 55.2114% ( 19) 00:09:17.192 11040.295 - 11090.708: 55.3493% ( 15) 00:09:17.192 11090.708 - 11141.120: 55.5055% ( 17) 00:09:17.192 11141.120 - 11191.532: 55.6526% ( 16) 00:09:17.192 11191.532 - 11241.945: 55.7996% ( 16) 00:09:17.192 11241.945 - 11292.357: 55.9099% ( 12) 00:09:17.192 11292.357 - 11342.769: 56.0386% ( 14) 00:09:17.192 11342.769 - 11393.182: 56.1581% ( 13) 00:09:17.192 11393.182 - 11443.594: 56.2868% ( 14) 00:09:17.192 11443.594 - 11494.006: 56.3879% ( 11) 00:09:17.192 11494.006 - 11544.418: 56.5074% ( 13) 00:09:17.192 11544.418 - 11594.831: 56.6360% ( 14) 00:09:17.192 11594.831 - 11645.243: 56.7647% ( 14) 00:09:17.192 11645.243 - 11695.655: 56.8750% ( 12) 00:09:17.192 11695.655 - 11746.068: 56.9853% ( 12) 00:09:17.192 11746.068 - 11796.480: 57.0956% ( 12) 00:09:17.192 11796.480 - 11846.892: 57.2243% ( 14) 00:09:17.192 11846.892 - 11897.305: 57.3438% ( 13) 00:09:17.192 11897.305 - 11947.717: 57.4449% ( 11) 00:09:17.192 11947.717 - 11998.129: 57.5643% ( 13) 00:09:17.192 11998.129 - 12048.542: 57.6930% ( 14) 00:09:17.192 12048.542 - 12098.954: 57.8125% ( 13) 00:09:17.192 12098.954 - 12149.366: 57.9228% ( 12) 00:09:17.192 12149.366 - 12199.778: 58.0423% ( 13) 00:09:17.192 12199.778 - 12250.191: 58.1434% ( 11) 00:09:17.192 12250.191 - 12300.603: 58.2445% ( 11) 
00:09:17.192 12300.603 - 12351.015: 58.3456% ( 11) 00:09:17.192 12351.015 - 12401.428: 58.4283% ( 9) 00:09:17.192 12401.428 - 12451.840: 58.5478% ( 13) 00:09:17.192 12451.840 - 12502.252: 58.6673% ( 13) 00:09:17.192 12502.252 - 12552.665: 58.7592% ( 10) 00:09:17.192 12552.665 - 12603.077: 58.8327% ( 8) 00:09:17.192 12603.077 - 12653.489: 58.9430% ( 12) 00:09:17.192 12653.489 - 12703.902: 59.0533% ( 12) 00:09:17.192 12703.902 - 12754.314: 59.1820% ( 14) 00:09:17.192 12754.314 - 12804.726: 59.2739% ( 10) 00:09:17.192 12804.726 - 12855.138: 59.3842% ( 12) 00:09:17.192 12855.138 - 12905.551: 59.4761% ( 10) 00:09:17.192 12905.551 - 13006.375: 59.7610% ( 31) 00:09:17.192 13006.375 - 13107.200: 60.0184% ( 28) 00:09:17.192 13107.200 - 13208.025: 60.3217% ( 33) 00:09:17.192 13208.025 - 13308.849: 60.6250% ( 33) 00:09:17.192 13308.849 - 13409.674: 60.9467% ( 35) 00:09:17.192 13409.674 - 13510.498: 61.4062% ( 50) 00:09:17.192 13510.498 - 13611.323: 62.1048% ( 76) 00:09:17.192 13611.323 - 13712.148: 62.6379% ( 58) 00:09:17.192 13712.148 - 13812.972: 63.2353% ( 65) 00:09:17.192 13812.972 - 13913.797: 63.8419% ( 66) 00:09:17.192 13913.797 - 14014.622: 64.5129% ( 73) 00:09:17.192 14014.622 - 14115.446: 65.1654% ( 71) 00:09:17.192 14115.446 - 14216.271: 66.0846% ( 100) 00:09:17.192 14216.271 - 14317.095: 66.8566% ( 84) 00:09:17.192 14317.095 - 14417.920: 67.7665% ( 99) 00:09:17.192 14417.920 - 14518.745: 68.7224% ( 104) 00:09:17.192 14518.745 - 14619.569: 69.9540% ( 134) 00:09:17.192 14619.569 - 14720.394: 71.2224% ( 138) 00:09:17.192 14720.394 - 14821.218: 72.4816% ( 137) 00:09:17.192 14821.218 - 14922.043: 73.7316% ( 136) 00:09:17.192 14922.043 - 15022.868: 74.9632% ( 134) 00:09:17.192 15022.868 - 15123.692: 76.0662% ( 120) 00:09:17.192 15123.692 - 15224.517: 77.1140% ( 114) 00:09:17.192 15224.517 - 15325.342: 78.1618% ( 114) 00:09:17.192 15325.342 - 15426.166: 79.1912% ( 112) 00:09:17.192 15426.166 - 15526.991: 80.2482% ( 115) 00:09:17.192 15526.991 - 15627.815: 81.2132% ( 105) 00:09:17.192 15627.815 - 15728.640: 82.0956% ( 96) 00:09:17.192 15728.640 - 15829.465: 82.8217% ( 79) 00:09:17.192 15829.465 - 15930.289: 83.5294% ( 77) 00:09:17.192 15930.289 - 16031.114: 84.2739% ( 81) 00:09:17.192 16031.114 - 16131.938: 84.9724% ( 76) 00:09:17.192 16131.938 - 16232.763: 85.5882% ( 67) 00:09:17.192 16232.763 - 16333.588: 86.2040% ( 67) 00:09:17.192 16333.588 - 16434.412: 86.8382% ( 69) 00:09:17.192 16434.412 - 16535.237: 87.4540% ( 67) 00:09:17.192 16535.237 - 16636.062: 87.9504% ( 54) 00:09:17.192 16636.062 - 16736.886: 88.4099% ( 50) 00:09:17.192 16736.886 - 16837.711: 88.8695% ( 50) 00:09:17.192 16837.711 - 16938.535: 89.3107% ( 48) 00:09:17.192 16938.535 - 17039.360: 89.7151% ( 44) 00:09:17.192 17039.360 - 17140.185: 90.0827% ( 40) 00:09:17.192 17140.185 - 17241.009: 90.4963% ( 45) 00:09:17.192 17241.009 - 17341.834: 90.9007% ( 44) 00:09:17.192 17341.834 - 17442.658: 91.2592% ( 39) 00:09:17.192 17442.658 - 17543.483: 91.6268% ( 40) 00:09:17.192 17543.483 - 17644.308: 91.9853% ( 39) 00:09:17.192 17644.308 - 17745.132: 92.3070% ( 35) 00:09:17.193 17745.132 - 17845.957: 92.6379% ( 36) 00:09:17.193 17845.957 - 17946.782: 92.9871% ( 38) 00:09:17.193 17946.782 - 18047.606: 93.2629% ( 30) 00:09:17.193 18047.606 - 18148.431: 93.5754% ( 34) 00:09:17.193 18148.431 - 18249.255: 93.9430% ( 40) 00:09:17.193 18249.255 - 18350.080: 94.3290% ( 42) 00:09:17.193 18350.080 - 18450.905: 94.7335% ( 44) 00:09:17.193 18450.905 - 18551.729: 95.1195% ( 42) 00:09:17.193 18551.729 - 18652.554: 95.5147% ( 43) 00:09:17.193 18652.554 - 
18753.378: 95.9007% ( 42) 00:09:17.193 18753.378 - 18854.203: 96.2224% ( 35) 00:09:17.193 18854.203 - 18955.028: 96.5349% ( 34) 00:09:17.193 18955.028 - 19055.852: 96.8107% ( 30) 00:09:17.193 19055.852 - 19156.677: 97.0037% ( 21) 00:09:17.193 19156.677 - 19257.502: 97.1783% ( 19) 00:09:17.193 19257.502 - 19358.326: 97.3162% ( 15) 00:09:17.193 19358.326 - 19459.151: 97.4449% ( 14) 00:09:17.193 19459.151 - 19559.975: 97.5643% ( 13) 00:09:17.193 19559.975 - 19660.800: 97.6562% ( 10) 00:09:17.193 19660.800 - 19761.625: 97.7206% ( 7) 00:09:17.193 19761.625 - 19862.449: 97.8125% ( 10) 00:09:17.193 19862.449 - 19963.274: 97.9136% ( 11) 00:09:17.193 19963.274 - 20064.098: 97.9963% ( 9) 00:09:17.193 20064.098 - 20164.923: 98.0790% ( 9) 00:09:17.193 20164.923 - 20265.748: 98.1526% ( 8) 00:09:17.193 20265.748 - 20366.572: 98.2353% ( 9) 00:09:17.193 20366.572 - 20467.397: 98.3272% ( 10) 00:09:17.193 20467.397 - 20568.222: 98.4099% ( 9) 00:09:17.193 20568.222 - 20669.046: 98.4926% ( 9) 00:09:17.193 20669.046 - 20769.871: 98.5294% ( 4) 00:09:17.193 20769.871 - 20870.695: 98.5754% ( 5) 00:09:17.193 20870.695 - 20971.520: 98.6029% ( 3) 00:09:17.193 20971.520 - 21072.345: 98.6489% ( 5) 00:09:17.193 21072.345 - 21173.169: 98.6949% ( 5) 00:09:17.193 21173.169 - 21273.994: 98.7408% ( 5) 00:09:17.193 21273.994 - 21374.818: 98.7960% ( 6) 00:09:17.193 21374.818 - 21475.643: 98.8235% ( 3) 00:09:17.193 33070.474 - 33272.123: 98.9154% ( 10) 00:09:17.193 33272.123 - 33473.772: 99.0165% ( 11) 00:09:17.193 33473.772 - 33675.422: 99.1176% ( 11) 00:09:17.193 33675.422 - 33877.071: 99.2096% ( 10) 00:09:17.193 33877.071 - 34078.720: 99.3199% ( 12) 00:09:17.193 34078.720 - 34280.369: 99.4210% ( 11) 00:09:17.193 34280.369 - 34482.018: 99.5129% ( 10) 00:09:17.193 34482.018 - 34683.668: 99.6140% ( 11) 00:09:17.193 34683.668 - 34885.317: 99.7151% ( 11) 00:09:17.193 34885.317 - 35086.966: 99.8162% ( 11) 00:09:17.193 35086.966 - 35288.615: 99.9173% ( 11) 00:09:17.193 35288.615 - 35490.265: 100.0000% ( 9) 00:09:17.193 00:09:17.193 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:17.193 ============================================================================== 00:09:17.193 Range in us Cumulative IO count 00:09:17.193 3856.542 - 3881.748: 0.0363% ( 4) 00:09:17.193 3881.748 - 3906.954: 0.0454% ( 1) 00:09:17.193 3906.954 - 3932.160: 0.0636% ( 2) 00:09:17.193 3932.160 - 3957.366: 0.0818% ( 2) 00:09:17.193 3957.366 - 3982.572: 0.0999% ( 2) 00:09:17.193 3982.572 - 4007.778: 0.1272% ( 3) 00:09:17.193 4007.778 - 4032.985: 0.1544% ( 3) 00:09:17.193 4032.985 - 4058.191: 0.1635% ( 1) 00:09:17.193 4058.191 - 4083.397: 0.1817% ( 2) 00:09:17.193 4083.397 - 4108.603: 0.2089% ( 3) 00:09:17.193 4108.603 - 4133.809: 0.2180% ( 1) 00:09:17.193 4133.809 - 4159.015: 0.2453% ( 3) 00:09:17.193 4159.015 - 4184.222: 0.2634% ( 2) 00:09:17.193 4184.222 - 4209.428: 0.2816% ( 2) 00:09:17.193 4209.428 - 4234.634: 0.2998% ( 2) 00:09:17.193 4234.634 - 4259.840: 0.3180% ( 2) 00:09:17.193 4259.840 - 4285.046: 0.3452% ( 3) 00:09:17.193 4285.046 - 4310.252: 0.3634% ( 2) 00:09:17.193 4310.252 - 4335.458: 0.3815% ( 2) 00:09:17.193 4335.458 - 4360.665: 0.3997% ( 2) 00:09:17.193 4360.665 - 4385.871: 0.4179% ( 2) 00:09:17.193 4385.871 - 4411.077: 0.4451% ( 3) 00:09:17.193 4411.077 - 4436.283: 0.4633% ( 2) 00:09:17.193 4436.283 - 4461.489: 0.4815% ( 2) 00:09:17.193 4461.489 - 4486.695: 0.4996% ( 2) 00:09:17.193 4486.695 - 4511.902: 0.5178% ( 2) 00:09:17.193 4511.902 - 4537.108: 0.5360% ( 2) 00:09:17.193 4537.108 - 4562.314: 0.5541% ( 2) 00:09:17.193 
4562.314 - 4587.520: 0.5723% ( 2) 00:09:17.193 4587.520 - 4612.726: 0.5905% ( 2) 00:09:17.193 4612.726 - 4637.932: 0.6086% ( 2) 00:09:17.193 4637.932 - 4663.138: 0.6359% ( 3) 00:09:17.193 4663.138 - 4688.345: 0.6541% ( 2) 00:09:17.193 4688.345 - 4713.551: 0.6722% ( 2) 00:09:17.193 4713.551 - 4738.757: 0.6904% ( 2) 00:09:17.193 4738.757 - 4763.963: 0.7086% ( 2) 00:09:17.193 4763.963 - 4789.169: 0.7358% ( 3) 00:09:17.193 4789.169 - 4814.375: 0.7540% ( 2) 00:09:17.193 4814.375 - 4839.582: 0.7722% ( 2) 00:09:17.193 4839.582 - 4864.788: 0.7903% ( 2) 00:09:17.193 4864.788 - 4889.994: 0.8176% ( 3) 00:09:17.193 4889.994 - 4915.200: 0.8358% ( 2) 00:09:17.193 4915.200 - 4940.406: 0.8539% ( 2) 00:09:17.193 4940.406 - 4965.612: 0.8721% ( 2) 00:09:17.193 4965.612 - 4990.818: 0.8903% ( 2) 00:09:17.193 4990.818 - 5016.025: 0.9175% ( 3) 00:09:17.193 5016.025 - 5041.231: 0.9357% ( 2) 00:09:17.193 5041.231 - 5066.437: 0.9539% ( 2) 00:09:17.193 5066.437 - 5091.643: 0.9720% ( 2) 00:09:17.193 5091.643 - 5116.849: 0.9902% ( 2) 00:09:17.193 5116.849 - 5142.055: 1.0084% ( 2) 00:09:17.193 5142.055 - 5167.262: 1.0265% ( 2) 00:09:17.193 5167.262 - 5192.468: 1.0538% ( 3) 00:09:17.193 5192.468 - 5217.674: 1.0719% ( 2) 00:09:17.193 5217.674 - 5242.880: 1.0901% ( 2) 00:09:17.193 5242.880 - 5268.086: 1.1083% ( 2) 00:09:17.193 5268.086 - 5293.292: 1.1355% ( 3) 00:09:17.193 5293.292 - 5318.498: 1.1446% ( 1) 00:09:17.193 5318.498 - 5343.705: 1.1628% ( 2) 00:09:17.193 7007.311 - 7057.723: 1.1991% ( 4) 00:09:17.193 7057.723 - 7108.135: 1.2355% ( 4) 00:09:17.193 7108.135 - 7158.548: 1.2900% ( 6) 00:09:17.193 7158.548 - 7208.960: 1.3626% ( 8) 00:09:17.193 7208.960 - 7259.372: 1.5080% ( 16) 00:09:17.193 7259.372 - 7309.785: 1.7805% ( 30) 00:09:17.193 7309.785 - 7360.197: 2.1984% ( 46) 00:09:17.193 7360.197 - 7410.609: 2.8252% ( 69) 00:09:17.193 7410.609 - 7461.022: 3.5792% ( 83) 00:09:17.193 7461.022 - 7511.434: 4.3060% ( 80) 00:09:17.193 7511.434 - 7561.846: 5.1054% ( 88) 00:09:17.193 7561.846 - 7612.258: 6.0683% ( 106) 00:09:17.193 7612.258 - 7662.671: 7.0585% ( 109) 00:09:17.193 7662.671 - 7713.083: 8.2576% ( 132) 00:09:17.193 7713.083 - 7763.495: 9.5113% ( 138) 00:09:17.193 7763.495 - 7813.908: 10.7649% ( 138) 00:09:17.193 7813.908 - 7864.320: 12.1548% ( 153) 00:09:17.193 7864.320 - 7914.732: 13.5901% ( 158) 00:09:17.193 7914.732 - 7965.145: 15.0254% ( 158) 00:09:17.193 7965.145 - 8015.557: 16.3608% ( 147) 00:09:17.193 8015.557 - 8065.969: 17.6690% ( 144) 00:09:17.193 8065.969 - 8116.382: 19.0134% ( 148) 00:09:17.193 8116.382 - 8166.794: 20.3579% ( 148) 00:09:17.193 8166.794 - 8217.206: 21.7024% ( 148) 00:09:17.193 8217.206 - 8267.618: 23.0741% ( 151) 00:09:17.193 8267.618 - 8318.031: 24.4640% ( 153) 00:09:17.193 8318.031 - 8368.443: 25.7994% ( 147) 00:09:17.193 8368.443 - 8418.855: 27.1621% ( 150) 00:09:17.193 8418.855 - 8469.268: 28.5247% ( 150) 00:09:17.193 8469.268 - 8519.680: 29.8964% ( 151) 00:09:17.193 8519.680 - 8570.092: 31.2500% ( 149) 00:09:17.193 8570.092 - 8620.505: 32.6217% ( 151) 00:09:17.193 8620.505 - 8670.917: 34.0025% ( 152) 00:09:17.193 8670.917 - 8721.329: 35.3470% ( 148) 00:09:17.193 8721.329 - 8771.742: 36.6824% ( 147) 00:09:17.193 8771.742 - 8822.154: 38.0178% ( 147) 00:09:17.193 8822.154 - 8872.566: 39.3805% ( 150) 00:09:17.193 8872.566 - 8922.978: 40.6977% ( 145) 00:09:17.193 8922.978 - 8973.391: 41.9876% ( 142) 00:09:17.193 8973.391 - 9023.803: 43.0596% ( 118) 00:09:17.193 9023.803 - 9074.215: 44.0770% ( 112) 00:09:17.193 9074.215 - 9124.628: 45.0672% ( 109) 00:09:17.193 9124.628 - 9175.040: 
45.8031% ( 81) 00:09:17.193 9175.040 - 9225.452: 46.4571% ( 72) 00:09:17.193 9225.452 - 9275.865: 47.0839% ( 69) 00:09:17.193 9275.865 - 9326.277: 47.6835% ( 66) 00:09:17.193 9326.277 - 9376.689: 48.2013% ( 57) 00:09:17.193 9376.689 - 9427.102: 48.6101% ( 45) 00:09:17.193 9427.102 - 9477.514: 48.9462% ( 37) 00:09:17.193 9477.514 - 9527.926: 49.1824% ( 26) 00:09:17.193 9527.926 - 9578.338: 49.4004% ( 24) 00:09:17.193 9578.338 - 9628.751: 49.6003% ( 22) 00:09:17.193 9628.751 - 9679.163: 49.7638% ( 18) 00:09:17.193 9679.163 - 9729.575: 49.9182% ( 17) 00:09:17.193 9729.575 - 9779.988: 50.0727% ( 17) 00:09:17.193 9779.988 - 9830.400: 50.2362% ( 18) 00:09:17.193 9830.400 - 9880.812: 50.4088% ( 19) 00:09:17.193 9880.812 - 9931.225: 50.5541% ( 16) 00:09:17.193 9931.225 - 9981.637: 50.7267% ( 19) 00:09:17.193 9981.637 - 10032.049: 50.8721% ( 16) 00:09:17.193 10032.049 - 10082.462: 51.0265% ( 17) 00:09:17.193 10082.462 - 10132.874: 51.2173% ( 21) 00:09:17.193 10132.874 - 10183.286: 51.3808% ( 18) 00:09:17.193 10183.286 - 10233.698: 51.5898% ( 23) 00:09:17.193 10233.698 - 10284.111: 51.7714% ( 20) 00:09:17.193 10284.111 - 10334.523: 51.9622% ( 21) 00:09:17.193 10334.523 - 10384.935: 52.1439% ( 20) 00:09:17.193 10384.935 - 10435.348: 52.3347% ( 21) 00:09:17.193 10435.348 - 10485.760: 52.5345% ( 22) 00:09:17.193 10485.760 - 10536.172: 52.7435% ( 23) 00:09:17.193 10536.172 - 10586.585: 52.9706% ( 25) 00:09:17.193 10586.585 - 10636.997: 53.1704% ( 22) 00:09:17.193 10636.997 - 10687.409: 53.3612% ( 21) 00:09:17.193 10687.409 - 10737.822: 53.5520% ( 21) 00:09:17.193 10737.822 - 10788.234: 53.7155% ( 18) 00:09:17.193 10788.234 - 10838.646: 53.8972% ( 20) 00:09:17.193 10838.646 - 10889.058: 54.0789% ( 20) 00:09:17.193 10889.058 - 10939.471: 54.2333% ( 17) 00:09:17.193 10939.471 - 10989.883: 54.3605% ( 14) 00:09:17.193 10989.883 - 11040.295: 54.4604% ( 11) 00:09:17.193 11040.295 - 11090.708: 54.5967% ( 15) 00:09:17.193 11090.708 - 11141.120: 54.7420% ( 16) 00:09:17.193 11141.120 - 11191.532: 54.8783% ( 15) 00:09:17.193 11191.532 - 11241.945: 55.0055% ( 14) 00:09:17.193 11241.945 - 11292.357: 55.1145% ( 12) 00:09:17.193 11292.357 - 11342.769: 55.2235% ( 12) 00:09:17.193 11342.769 - 11393.182: 55.3507% ( 14) 00:09:17.193 11393.182 - 11443.594: 55.4778% ( 14) 00:09:17.193 11443.594 - 11494.006: 55.5868% ( 12) 00:09:17.193 11494.006 - 11544.418: 55.7049% ( 13) 00:09:17.193 11544.418 - 11594.831: 55.8230% ( 13) 00:09:17.193 11594.831 - 11645.243: 55.9411% ( 13) 00:09:17.193 11645.243 - 11695.655: 56.0774% ( 15) 00:09:17.193 11695.655 - 11746.068: 56.2318% ( 17) 00:09:17.193 11746.068 - 11796.480: 56.3590% ( 14) 00:09:17.193 11796.480 - 11846.892: 56.5134% ( 17) 00:09:17.193 11846.892 - 11897.305: 56.6679% ( 17) 00:09:17.193 11897.305 - 11947.717: 56.8132% ( 16) 00:09:17.193 11947.717 - 11998.129: 56.9586% ( 16) 00:09:17.193 11998.129 - 12048.542: 57.1039% ( 16) 00:09:17.193 12048.542 - 12098.954: 57.2674% ( 18) 00:09:17.193 12098.954 - 12149.366: 57.4400% ( 19) 00:09:17.193 12149.366 - 12199.778: 57.5854% ( 16) 00:09:17.193 12199.778 - 12250.191: 57.7307% ( 16) 00:09:17.193 12250.191 - 12300.603: 57.8943% ( 18) 00:09:17.193 12300.603 - 12351.015: 58.0396% ( 16) 00:09:17.193 12351.015 - 12401.428: 58.1940% ( 17) 00:09:17.193 12401.428 - 12451.840: 58.3394% ( 16) 00:09:17.193 12451.840 - 12502.252: 58.4938% ( 17) 00:09:17.193 12502.252 - 12552.665: 58.6392% ( 16) 00:09:17.193 12552.665 - 12603.077: 58.7936% ( 17) 00:09:17.193 12603.077 - 12653.489: 58.9208% ( 14) 00:09:17.193 12653.489 - 12703.902: 59.0661% ( 16) 
00:09:17.193 12703.902 - 12754.314: 59.2206% ( 17) 00:09:17.193 12754.314 - 12804.726: 59.3387% ( 13) 00:09:17.193 12804.726 - 12855.138: 59.4568% ( 13) 00:09:17.193 12855.138 - 12905.551: 59.5839% ( 14) 00:09:17.193 12905.551 - 13006.375: 59.9291% ( 38) 00:09:17.193 13006.375 - 13107.200: 60.2743% ( 38) 00:09:17.193 13107.200 - 13208.025: 60.6650% ( 43) 00:09:17.193 13208.025 - 13308.849: 61.0647% ( 44) 00:09:17.193 13308.849 - 13409.674: 61.5461% ( 53) 00:09:17.193 13409.674 - 13510.498: 62.1094% ( 62) 00:09:17.193 13510.498 - 13611.323: 62.7453% ( 70) 00:09:17.193 13611.323 - 13712.148: 63.3721% ( 69) 00:09:17.193 13712.148 - 13812.972: 63.9353% ( 62) 00:09:17.193 13812.972 - 13913.797: 64.5076% ( 63) 00:09:17.193 13913.797 - 14014.622: 65.2616% ( 83) 00:09:17.193 14014.622 - 14115.446: 66.0701% ( 89) 00:09:17.193 14115.446 - 14216.271: 66.9241% ( 94) 00:09:17.193 14216.271 - 14317.095: 67.8234% ( 99) 00:09:17.193 14317.095 - 14417.920: 68.9408% ( 123) 00:09:17.193 14417.920 - 14518.745: 69.9673% ( 113) 00:09:17.193 14518.745 - 14619.569: 71.1483% ( 130) 00:09:17.193 14619.569 - 14720.394: 72.3201% ( 129) 00:09:17.193 14720.394 - 14821.218: 73.4466% ( 124) 00:09:17.193 14821.218 - 14922.043: 74.6548% ( 133) 00:09:17.193 14922.043 - 15022.868: 75.8176% ( 128) 00:09:17.193 15022.868 - 15123.692: 76.8805% ( 117) 00:09:17.193 15123.692 - 15224.517: 77.8888% ( 111) 00:09:17.193 15224.517 - 15325.342: 78.8245% ( 103) 00:09:17.193 15325.342 - 15426.166: 79.8238% ( 110) 00:09:17.193 15426.166 - 15526.991: 80.8140% ( 109) 00:09:17.193 15526.991 - 15627.815: 81.7042% ( 98) 00:09:17.193 15627.815 - 15728.640: 82.5218% ( 90) 00:09:17.193 15728.640 - 15829.465: 83.3757% ( 94) 00:09:17.193 15829.465 - 15930.289: 84.0752% ( 77) 00:09:17.193 15930.289 - 16031.114: 84.7475% ( 74) 00:09:17.193 16031.114 - 16131.938: 85.4833% ( 81) 00:09:17.193 16131.938 - 16232.763: 86.1464% ( 73) 00:09:17.193 16232.763 - 16333.588: 86.7278% ( 64) 00:09:17.193 16333.588 - 16434.412: 87.2366% ( 56) 00:09:17.193 16434.412 - 16535.237: 87.6453% ( 45) 00:09:17.193 16535.237 - 16636.062: 87.9451% ( 33) 00:09:17.193 16636.062 - 16736.886: 88.2722% ( 36) 00:09:17.193 16736.886 - 16837.711: 88.5629% ( 32) 00:09:17.193 16837.711 - 16938.535: 88.8717% ( 34) 00:09:17.193 16938.535 - 17039.360: 89.2442% ( 41) 00:09:17.193 17039.360 - 17140.185: 89.6166% ( 41) 00:09:17.193 17140.185 - 17241.009: 89.9982% ( 42) 00:09:17.193 17241.009 - 17341.834: 90.3888% ( 43) 00:09:17.193 17341.834 - 17442.658: 90.7068% ( 35) 00:09:17.193 17442.658 - 17543.483: 91.0065% ( 33) 00:09:17.193 17543.483 - 17644.308: 91.3336% ( 36) 00:09:17.194 17644.308 - 17745.132: 91.6606% ( 36) 00:09:17.194 17745.132 - 17845.957: 92.0058% ( 38) 00:09:17.194 17845.957 - 17946.782: 92.3510% ( 38) 00:09:17.194 17946.782 - 18047.606: 92.7144% ( 40) 00:09:17.194 18047.606 - 18148.431: 93.0959% ( 42) 00:09:17.194 18148.431 - 18249.255: 93.4502% ( 39) 00:09:17.194 18249.255 - 18350.080: 93.8408% ( 43) 00:09:17.194 18350.080 - 18450.905: 94.2496% ( 45) 00:09:17.194 18450.905 - 18551.729: 94.6403% ( 43) 00:09:17.194 18551.729 - 18652.554: 95.0581% ( 46) 00:09:17.194 18652.554 - 18753.378: 95.4124% ( 39) 00:09:17.194 18753.378 - 18854.203: 95.7667% ( 39) 00:09:17.194 18854.203 - 18955.028: 96.1392% ( 41) 00:09:17.194 18955.028 - 19055.852: 96.4753% ( 37) 00:09:17.194 19055.852 - 19156.677: 96.8205% ( 38) 00:09:17.194 19156.677 - 19257.502: 97.0930% ( 30) 00:09:17.194 19257.502 - 19358.326: 97.3110% ( 24) 00:09:17.194 19358.326 - 19459.151: 97.5291% ( 24) 00:09:17.194 19459.151 
- 19559.975: 97.6835% ( 17) 00:09:17.194 19559.975 - 19660.800: 97.7834% ( 11) 00:09:17.194 19660.800 - 19761.625: 97.9197% ( 15) 00:09:17.194 19761.625 - 19862.449: 98.0378% ( 13) 00:09:17.194 19862.449 - 19963.274: 98.1195% ( 9) 00:09:17.194 19963.274 - 20064.098: 98.2104% ( 10) 00:09:17.194 20064.098 - 20164.923: 98.2740% ( 7) 00:09:17.194 20164.923 - 20265.748: 98.3194% ( 5) 00:09:17.194 20265.748 - 20366.572: 98.3739% ( 6) 00:09:17.194 20366.572 - 20467.397: 98.4193% ( 5) 00:09:17.194 20467.397 - 20568.222: 98.4648% ( 5) 00:09:17.194 20568.222 - 20669.046: 98.5102% ( 5) 00:09:17.194 20669.046 - 20769.871: 98.5919% ( 9) 00:09:17.194 20769.871 - 20870.695: 98.6919% ( 11) 00:09:17.194 20870.695 - 20971.520: 98.7827% ( 10) 00:09:17.194 20971.520 - 21072.345: 98.8735% ( 10) 00:09:17.194 21072.345 - 21173.169: 98.9826% ( 12) 00:09:17.194 21173.169 - 21273.994: 99.0734% ( 10) 00:09:17.194 21273.994 - 21374.818: 99.1552% ( 9) 00:09:17.194 21374.818 - 21475.643: 99.2188% ( 7) 00:09:17.194 21475.643 - 21576.468: 99.2733% ( 6) 00:09:17.194 21576.468 - 21677.292: 99.3187% ( 5) 00:09:17.194 21677.292 - 21778.117: 99.3732% ( 6) 00:09:17.194 21778.117 - 21878.942: 99.4186% ( 5) 00:09:17.194 21878.942 - 21979.766: 99.4731% ( 6) 00:09:17.194 21979.766 - 22080.591: 99.5094% ( 4) 00:09:17.194 22080.591 - 22181.415: 99.5640% ( 6) 00:09:17.194 22181.415 - 22282.240: 99.6185% ( 6) 00:09:17.194 22282.240 - 22383.065: 99.6639% ( 5) 00:09:17.194 22383.065 - 22483.889: 99.7184% ( 6) 00:09:17.194 22483.889 - 22584.714: 99.7638% ( 5) 00:09:17.194 22584.714 - 22685.538: 99.8183% ( 6) 00:09:17.194 22685.538 - 22786.363: 99.8547% ( 4) 00:09:17.194 22786.363 - 22887.188: 99.9001% ( 5) 00:09:17.194 22887.188 - 22988.012: 99.9546% ( 6) 00:09:17.194 22988.012 - 23088.837: 100.0000% ( 5) 00:09:17.194 00:09:17.194 00:03:31 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:18.577 Initializing NVMe Controllers 00:09:18.577 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:18.577 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:18.577 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:18.577 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:18.577 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:18.577 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:18.577 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:18.577 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:18.577 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:18.577 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:18.577 Initialization complete. Launching workers. 
00:09:18.577 ======================================================== 00:09:18.577 Latency(us) 00:09:18.577 Device Information : IOPS MiB/s Average min max 00:09:18.577 PCIE (0000:00:09.0) NSID 1 from core 0: 14107.96 165.33 9070.33 5685.45 22534.87 00:09:18.577 PCIE (0000:00:06.0) NSID 1 from core 0: 14107.96 165.33 9072.58 5342.04 23451.21 00:09:18.577 PCIE (0000:00:07.0) NSID 1 from core 0: 14107.96 165.33 9067.18 5931.68 23844.71 00:09:18.577 PCIE (0000:00:08.0) NSID 1 from core 0: 14107.96 165.33 9061.92 5871.39 25231.36 00:09:18.577 PCIE (0000:00:08.0) NSID 2 from core 0: 14107.96 165.33 9056.97 4905.30 25279.49 00:09:18.577 PCIE (0000:00:08.0) NSID 3 from core 0: 14235.06 166.82 8971.33 4253.25 16266.64 00:09:18.577 ======================================================== 00:09:18.577 Total : 84774.86 993.46 9049.93 4253.25 25279.49 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6301.538us 00:09:18.578 10.00000% : 7511.434us 00:09:18.578 25.00000% : 8065.969us 00:09:18.578 50.00000% : 8721.329us 00:09:18.578 75.00000% : 9830.400us 00:09:18.578 90.00000% : 10939.471us 00:09:18.578 95.00000% : 11796.480us 00:09:18.578 98.00000% : 13208.025us 00:09:18.578 99.00000% : 14720.394us 00:09:18.578 99.50000% : 21778.117us 00:09:18.578 99.90000% : 22282.240us 00:09:18.578 99.99000% : 22584.714us 00:09:18.578 99.99900% : 22584.714us 00:09:18.578 99.99990% : 22584.714us 00:09:18.578 99.99999% : 22584.714us 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6125.095us 00:09:18.578 10.00000% : 7259.372us 00:09:18.578 25.00000% : 7965.145us 00:09:18.578 50.00000% : 8771.742us 00:09:18.578 75.00000% : 9779.988us 00:09:18.578 90.00000% : 11040.295us 00:09:18.578 95.00000% : 12149.366us 00:09:18.578 98.00000% : 13611.323us 00:09:18.578 99.00000% : 15526.991us 00:09:18.578 99.50000% : 22181.415us 00:09:18.578 99.90000% : 23189.662us 00:09:18.578 99.99000% : 23492.135us 00:09:18.578 99.99900% : 23492.135us 00:09:18.578 99.99990% : 23492.135us 00:09:18.578 99.99999% : 23492.135us 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6301.538us 00:09:18.578 10.00000% : 7511.434us 00:09:18.578 25.00000% : 8015.557us 00:09:18.578 50.00000% : 8670.917us 00:09:18.578 75.00000% : 9729.575us 00:09:18.578 90.00000% : 10939.471us 00:09:18.578 95.00000% : 12250.191us 00:09:18.578 98.00000% : 14014.622us 00:09:18.578 99.00000% : 16131.938us 00:09:18.578 99.50000% : 22786.363us 00:09:18.578 99.90000% : 23693.785us 00:09:18.578 99.99000% : 23895.434us 00:09:18.578 99.99900% : 23895.434us 00:09:18.578 99.99990% : 23895.434us 00:09:18.578 99.99999% : 23895.434us 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6175.508us 00:09:18.578 10.00000% : 7461.022us 00:09:18.578 25.00000% : 8015.557us 00:09:18.578 50.00000% : 8670.917us 00:09:18.578 75.00000% : 9628.751us 00:09:18.578 90.00000% : 10939.471us 00:09:18.578 95.00000% : 12351.015us 00:09:18.578 98.00000% : 14216.271us 00:09:18.578 
99.00000% : 15022.868us 00:09:18.578 99.50000% : 24097.083us 00:09:18.578 99.90000% : 25004.505us 00:09:18.578 99.99000% : 25306.978us 00:09:18.578 99.99900% : 25306.978us 00:09:18.578 99.99990% : 25306.978us 00:09:18.578 99.99999% : 25306.978us 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6074.683us 00:09:18.578 10.00000% : 7461.022us 00:09:18.578 25.00000% : 8065.969us 00:09:18.578 50.00000% : 8721.329us 00:09:18.578 75.00000% : 9729.575us 00:09:18.578 90.00000% : 10889.058us 00:09:18.578 95.00000% : 11998.129us 00:09:18.578 98.00000% : 13913.797us 00:09:18.578 99.00000% : 15728.640us 00:09:18.578 99.50000% : 24197.908us 00:09:18.578 99.90000% : 25105.329us 00:09:18.578 99.99000% : 25306.978us 00:09:18.578 99.99900% : 25306.978us 00:09:18.578 99.99990% : 25306.978us 00:09:18.578 99.99999% : 25306.978us 00:09:18.578 00:09:18.578 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:18.578 ================================================================================= 00:09:18.578 1.00000% : 6074.683us 00:09:18.578 10.00000% : 7511.434us 00:09:18.578 25.00000% : 8065.969us 00:09:18.578 50.00000% : 8670.917us 00:09:18.578 75.00000% : 9779.988us 00:09:18.578 90.00000% : 10889.058us 00:09:18.578 95.00000% : 11746.068us 00:09:18.578 98.00000% : 13308.849us 00:09:18.578 99.00000% : 14720.394us 00:09:18.578 99.50000% : 15426.166us 00:09:18.578 99.90000% : 16131.938us 00:09:18.578 99.99000% : 16333.588us 00:09:18.578 99.99900% : 16333.588us 00:09:18.578 99.99990% : 16333.588us 00:09:18.578 99.99999% : 16333.588us 00:09:18.578 00:09:18.578 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:18.578 ============================================================================== 00:09:18.578 Range in us Cumulative IO count 00:09:18.578 5671.385 - 5696.591: 0.0070% ( 1) 00:09:18.578 5898.240 - 5923.446: 0.0211% ( 2) 00:09:18.578 5973.858 - 5999.065: 0.0352% ( 2) 00:09:18.578 5999.065 - 6024.271: 0.0563% ( 3) 00:09:18.578 6024.271 - 6049.477: 0.0704% ( 2) 00:09:18.578 6049.477 - 6074.683: 0.1267% ( 8) 00:09:18.578 6074.683 - 6099.889: 0.1619% ( 5) 00:09:18.578 6099.889 - 6125.095: 0.2252% ( 9) 00:09:18.578 6125.095 - 6150.302: 0.2886% ( 9) 00:09:18.578 6150.302 - 6175.508: 0.3801% ( 13) 00:09:18.578 6175.508 - 6200.714: 0.5349% ( 22) 00:09:18.578 6200.714 - 6225.920: 0.6686% ( 19) 00:09:18.578 6225.920 - 6251.126: 0.7672% ( 14) 00:09:18.578 6251.126 - 6276.332: 0.9009% ( 19) 00:09:18.578 6276.332 - 6301.538: 1.2458% ( 49) 00:09:18.578 6301.538 - 6326.745: 1.3654% ( 17) 00:09:18.578 6326.745 - 6351.951: 1.4640% ( 14) 00:09:18.578 6351.951 - 6377.157: 1.8651% ( 57) 00:09:18.578 6377.157 - 6402.363: 2.1256% ( 37) 00:09:18.578 6402.363 - 6427.569: 2.2593% ( 19) 00:09:18.578 6427.569 - 6452.775: 2.4775% ( 31) 00:09:18.578 6452.775 - 6503.188: 3.6810% ( 171) 00:09:18.578 6503.188 - 6553.600: 4.0470% ( 52) 00:09:18.578 6553.600 - 6604.012: 4.2511% ( 29) 00:09:18.578 6604.012 - 6654.425: 4.7086% ( 65) 00:09:18.578 6654.425 - 6704.837: 4.8494% ( 20) 00:09:18.578 6704.837 - 6755.249: 4.9972% ( 21) 00:09:18.578 6755.249 - 6805.662: 5.1028% ( 15) 00:09:18.578 6805.662 - 6856.074: 5.2013% ( 14) 00:09:18.578 6856.074 - 6906.486: 5.2858% ( 12) 00:09:18.578 6906.486 - 6956.898: 5.3702% ( 12) 00:09:18.578 6956.898 - 7007.311: 5.4969% ( 18) 00:09:18.578 7007.311 - 7057.723: 5.6166% ( 17) 00:09:18.578 7057.723 - 7108.135: 
5.7925% ( 25) 00:09:18.578 7108.135 - 7158.548: 6.1303% ( 48) 00:09:18.578 7158.548 - 7208.960: 6.4119% ( 40) 00:09:18.578 7208.960 - 7259.372: 6.7919% ( 54) 00:09:18.578 7259.372 - 7309.785: 7.2917% ( 71) 00:09:18.578 7309.785 - 7360.197: 7.8336% ( 77) 00:09:18.578 7360.197 - 7410.609: 8.6852% ( 121) 00:09:18.578 7410.609 - 7461.022: 9.4806% ( 113) 00:09:18.578 7461.022 - 7511.434: 10.4519% ( 138) 00:09:18.578 7511.434 - 7561.846: 11.5569% ( 157) 00:09:18.578 7561.846 - 7612.258: 12.6126% ( 150) 00:09:18.578 7612.258 - 7662.671: 13.7599% ( 163) 00:09:18.578 7662.671 - 7713.083: 15.2801% ( 216) 00:09:18.578 7713.083 - 7763.495: 16.7159% ( 204) 00:09:18.578 7763.495 - 7813.908: 18.2081% ( 212) 00:09:18.578 7813.908 - 7864.320: 19.6861% ( 210) 00:09:18.578 7864.320 - 7914.732: 21.1852% ( 213) 00:09:18.578 7914.732 - 7965.145: 22.6985% ( 215) 00:09:18.578 7965.145 - 8015.557: 24.3314% ( 232) 00:09:18.578 8015.557 - 8065.969: 26.0980% ( 251) 00:09:18.578 8065.969 - 8116.382: 28.0757% ( 281) 00:09:18.578 8116.382 - 8166.794: 30.1380% ( 293) 00:09:18.578 8166.794 - 8217.206: 31.9327% ( 255) 00:09:18.578 8217.206 - 8267.618: 33.7697% ( 261) 00:09:18.578 8267.618 - 8318.031: 35.8108% ( 290) 00:09:18.578 8318.031 - 8368.443: 37.6337% ( 259) 00:09:18.578 8368.443 - 8418.855: 39.4496% ( 258) 00:09:18.578 8418.855 - 8469.268: 41.4344% ( 282) 00:09:18.578 8469.268 - 8519.680: 43.3347% ( 270) 00:09:18.578 8519.680 - 8570.092: 45.1225% ( 254) 00:09:18.578 8570.092 - 8620.505: 46.8398% ( 244) 00:09:18.578 8620.505 - 8670.917: 48.5642% ( 245) 00:09:18.578 8670.917 - 8721.329: 50.3801% ( 258) 00:09:18.578 8721.329 - 8771.742: 52.1044% ( 245) 00:09:18.578 8771.742 - 8822.154: 53.6881% ( 225) 00:09:18.578 8822.154 - 8872.566: 55.2083% ( 216) 00:09:18.578 8872.566 - 8922.978: 56.6653% ( 207) 00:09:18.578 8922.978 - 8973.391: 58.1715% ( 214) 00:09:18.578 8973.391 - 9023.803: 59.6847% ( 215) 00:09:18.578 9023.803 - 9074.215: 61.0642% ( 196) 00:09:18.578 9074.215 - 9124.628: 62.3592% ( 184) 00:09:18.578 9124.628 - 9175.040: 63.5909% ( 175) 00:09:18.578 9175.040 - 9225.452: 64.6678% ( 153) 00:09:18.578 9225.452 - 9275.865: 65.6391% ( 138) 00:09:18.578 9275.865 - 9326.277: 66.6948% ( 150) 00:09:18.578 9326.277 - 9376.689: 67.6802% ( 140) 00:09:18.578 9376.689 - 9427.102: 68.6515% ( 138) 00:09:18.578 9427.102 - 9477.514: 69.5383% ( 126) 00:09:18.578 9477.514 - 9527.926: 70.5940% ( 150) 00:09:18.578 9527.926 - 9578.338: 71.3612% ( 109) 00:09:18.578 9578.338 - 9628.751: 72.1284% ( 109) 00:09:18.578 9628.751 - 9679.163: 72.9237% ( 113) 00:09:18.578 9679.163 - 9729.575: 73.6768% ( 107) 00:09:18.578 9729.575 - 9779.988: 74.3525% ( 96) 00:09:18.578 9779.988 - 9830.400: 75.0563% ( 100) 00:09:18.578 9830.400 - 9880.812: 75.7320% ( 96) 00:09:18.578 9880.812 - 9931.225: 76.6188% ( 126) 00:09:18.578 9931.225 - 9981.637: 77.4282% ( 115) 00:09:18.578 9981.637 - 10032.049: 78.2024% ( 110) 00:09:18.579 10032.049 - 10082.462: 79.0189% ( 116) 00:09:18.579 10082.462 - 10132.874: 79.8212% ( 114) 00:09:18.579 10132.874 - 10183.286: 80.5814% ( 108) 00:09:18.579 10183.286 - 10233.698: 81.3485% ( 109) 00:09:18.579 10233.698 - 10284.111: 82.1227% ( 110) 00:09:18.579 10284.111 - 10334.523: 82.9462% ( 117) 00:09:18.579 10334.523 - 10384.935: 83.7556% ( 115) 00:09:18.579 10384.935 - 10435.348: 84.4524% ( 99) 00:09:18.579 10435.348 - 10485.760: 85.1140% ( 94) 00:09:18.579 10485.760 - 10536.172: 85.7193% ( 86) 00:09:18.579 10536.172 - 10586.585: 86.3176% ( 85) 00:09:18.579 10586.585 - 10636.997: 87.0003% ( 97) 00:09:18.579 10636.997 - 
10687.409: 87.6900% ( 98)
00:09:18.579 [cumulative latency distribution continues per bucket, reaching 100.0000% by 22584.714 us; per-bucket detail omitted]
00:09:18.579 
00:09:18.579 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:18.579 ==============================================================================
00:09:18.579        Range in us     Cumulative IO count
00:09:18.579 [cumulative latency distribution: 0.0070% at 5318.498 - 5343.705 us, rising to 100.0000% by 23391.311 - 23492.135 us; per-bucket detail omitted]
00:09:18.580 
00:09:18.580 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:18.580 ==============================================================================
00:09:18.580        Range in us     Cumulative IO count
00:09:18.580 [cumulative latency distribution: 0.0141% at 5923.446 - 5948.652 us, rising to 100.0000% by 23794.609 - 23895.434 us; per-bucket detail omitted]
00:09:18.581 
00:09:18.581 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:18.581 ==============================================================================
00:09:18.581        Range in us     Cumulative IO count
00:09:18.582 [cumulative latency distribution: 0.0141% at 5847.828 - 5873.034 us, rising to 100.0000% by 25206.154 - 25306.978 us; per-bucket detail omitted]
00:09:18.583 
00:09:18.583 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:18.583 ==============================================================================
00:09:18.583        Range in us     Cumulative IO count
00:09:18.583 [cumulative latency distribution: 0.0070% at 4889.994 - 4915.200 us, rising to 100.0000% by 25206.154 - 25306.978 us; per-bucket detail omitted]
00:09:18.584 
00:09:18.584 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:18.584 ==============================================================================
00:09:18.584        Range in us     Cumulative IO count
00:09:18.585 [cumulative latency distribution: 0.0070% at 4234.634 - 4259.840 us, rising to 100.0000% by 16232.763 - 16333.588 us; per-bucket detail omitted]
00:09:18.585 
00:09:18.585 00:03:32 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:18.585 
00:09:18.585 real 0m2.485s
00:09:18.585 user 0m2.170s
00:09:18.585 sys 0m0.200s
00:09:18.585 00:03:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.585 00:03:32 -- common/autotest_common.sh@10 -- # set +x
00:09:18.585 ************************************
00:09:18.585 END TEST nvme_perf
00:09:18.585 ************************************
00:09:18.585 00:03:32 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:18.585 00:03:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:18.585 00:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:18.585 00:03:32 -- common/autotest_common.sh@10 -- # set +x
00:09:18.585 ************************************
00:09:18.585 START TEST nvme_hello_world
00:09:18.585 ************************************
00:09:18.585 00:03:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:18.585 Initializing NVMe Controllers
00:09:18.585 Attached to 0000:00:09.0
00:09:18.585 Namespace ID: 1 size: 1GB
00:09:18.585 Attached to 0000:00:06.0
00:09:18.585 Namespace ID: 1 size: 6GB
00:09:18.585 Attached to 0000:00:07.0
00:09:18.585 Namespace ID: 1 size: 5GB
00:09:18.585 Attached to 0000:00:08.0
00:09:18.585 Namespace ID: 1 size: 4GB
00:09:18.585 Namespace ID: 2 size: 4GB
00:09:18.585 Namespace ID: 3 size: 4GB
00:09:18.585 Initialization complete.
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 INFO: using host memory buffer for IO
00:09:18.585 Hello world!
00:09:18.585 
00:09:18.585 real 0m0.200s
00:09:18.585 user 0m0.066s
00:09:18.585 sys 0m0.086s
00:09:18.585 00:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.585 ************************************
00:09:18.585 00:03:33 -- common/autotest_common.sh@10 -- # set +x
00:09:18.585 END TEST nvme_hello_world
00:09:18.585 ************************************
00:09:18.585 00:03:33 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:18.585 00:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:18.585 00:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:18.585 00:03:33 -- common/autotest_common.sh@10 -- # set +x
00:09:18.585 ************************************
00:09:18.585 START TEST nvme_sgl
00:09:18.585 ************************************
00:09:18.585 00:03:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:18.843 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:18.843 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:18.843 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:18.843 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:18.843 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:18.843 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:18.844 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:18.844 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:18.844 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:18.844 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:09:18.844 NVMe Readv/Writev Request test
00:09:18.844 Attached to 0000:00:09.0
00:09:18.844 Attached to 0000:00:06.0
00:09:18.844 Attached to 0000:00:07.0
00:09:18.844 Attached to 0000:00:08.0
00:09:18.844 0000:00:06.0: build_io_request_2 test passed
00:09:18.844 0000:00:06.0: build_io_request_4 test passed
00:09:18.844 0000:00:06.0: build_io_request_5 test passed
00:09:18.844 0000:00:06.0: build_io_request_6 test passed
00:09:18.844 0000:00:06.0: build_io_request_7 test passed
00:09:18.844 0000:00:06.0: build_io_request_10 test passed
00:09:18.844 0000:00:07.0: build_io_request_2 test passed
00:09:18.844 0000:00:07.0: build_io_request_4 test passed
00:09:18.844 0000:00:07.0: build_io_request_5 test passed
00:09:18.844 0000:00:07.0: build_io_request_6 test passed
00:09:18.844 0000:00:07.0: build_io_request_7 test passed
00:09:18.844 0000:00:07.0: build_io_request_10 test passed
00:09:18.844 Cleaning up...
00:09:18.844 
00:09:18.844 real 0m0.256s
00:09:18.844 user 0m0.128s
00:09:18.844 sys 0m0.084s
00:09:18.844 00:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.844 00:03:33 -- common/autotest_common.sh@10 -- # set +x
00:09:18.844 ************************************
00:09:18.844 END TEST nvme_sgl
00:09:18.844 ************************************
00:09:18.844 00:03:33 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:18.844 00:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:18.844 00:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:18.844 00:03:33 -- common/autotest_common.sh@10 -- # set +x
00:09:18.844 ************************************
00:09:18.844 START TEST nvme_e2edp
00:09:18.844 ************************************
00:09:18.844 00:03:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:19.102 NVMe Write/Read with End-to-End data protection test
00:09:19.102 Attached to 0000:00:09.0
00:09:19.102 Attached to 0000:00:06.0
00:09:19.102 Attached to 0000:00:07.0
00:09:19.102 Attached to 0000:00:08.0
00:09:19.102 Cleaning up...
00:09:19.102 00:09:19.102 real 0m0.188s 00:09:19.102 user 0m0.054s 00:09:19.102 sys 0m0.083s 00:09:19.102 00:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.102 00:03:33 -- common/autotest_common.sh@10 -- # set +x 00:09:19.102 ************************************ 00:09:19.102 END TEST nvme_e2edp 00:09:19.102 ************************************ 00:09:19.102 00:03:33 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:19.102 00:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.102 00:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.102 00:03:33 -- common/autotest_common.sh@10 -- # set +x 00:09:19.102 ************************************ 00:09:19.102 START TEST nvme_reserve 00:09:19.102 ************************************ 00:09:19.102 00:03:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:19.359 ===================================================== 00:09:19.359 NVMe Controller at PCI bus 0, device 9, function 0 00:09:19.359 ===================================================== 00:09:19.359 Reservations: Not Supported 00:09:19.359 ===================================================== 00:09:19.360 NVMe Controller at PCI bus 0, device 6, function 0 00:09:19.360 ===================================================== 00:09:19.360 Reservations: Not Supported 00:09:19.360 ===================================================== 00:09:19.360 NVMe Controller at PCI bus 0, device 7, function 0 00:09:19.360 ===================================================== 00:09:19.360 Reservations: Not Supported 00:09:19.360 ===================================================== 00:09:19.360 NVMe Controller at PCI bus 0, device 8, function 0 00:09:19.360 ===================================================== 00:09:19.360 Reservations: Not Supported 00:09:19.360 Reservation test passed 00:09:19.360 00:09:19.360 real 0m0.186s 00:09:19.360 user 0m0.050s 00:09:19.360 sys 0m0.088s 00:09:19.360 00:03:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.360 00:03:33 -- common/autotest_common.sh@10 -- # set +x 00:09:19.360 ************************************ 00:09:19.360 END TEST nvme_reserve 00:09:19.360 ************************************ 00:09:19.360 00:03:33 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:19.360 00:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.360 00:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.360 00:03:33 -- common/autotest_common.sh@10 -- # set +x 00:09:19.360 ************************************ 00:09:19.360 START TEST nvme_err_injection 00:09:19.360 ************************************ 00:09:19.360 00:03:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:19.618 NVMe Error Injection test 00:09:19.618 Attached to 0000:00:09.0 00:09:19.618 Attached to 0000:00:06.0 00:09:19.618 Attached to 0000:00:07.0 00:09:19.618 Attached to 0000:00:08.0 00:09:19.618 0000:00:09.0: get features failed as expected 00:09:19.618 0000:00:06.0: get features failed as expected 00:09:19.618 0000:00:07.0: get features failed as expected 00:09:19.618 0000:00:08.0: get features failed as expected 00:09:19.618 0000:00:09.0: get features successfully as expected 00:09:19.618 0000:00:06.0: get features successfully as expected 00:09:19.618 0000:00:07.0: get features 
successfully as expected 00:09:19.618 0000:00:08.0: get features successfully as expected 00:09:19.618 0000:00:09.0: read failed as expected 00:09:19.618 0000:00:06.0: read failed as expected 00:09:19.618 0000:00:07.0: read failed as expected 00:09:19.618 0000:00:08.0: read failed as expected 00:09:19.618 0000:00:09.0: read successfully as expected 00:09:19.618 0000:00:06.0: read successfully as expected 00:09:19.618 0000:00:07.0: read successfully as expected 00:09:19.618 0000:00:08.0: read successfully as expected 00:09:19.618 Cleaning up... 00:09:19.618 00:09:19.618 real 0m0.220s 00:09:19.618 user 0m0.086s 00:09:19.618 sys 0m0.088s 00:09:19.618 00:03:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.618 00:03:34 -- common/autotest_common.sh@10 -- # set +x 00:09:19.618 ************************************ 00:09:19.618 END TEST nvme_err_injection 00:09:19.618 ************************************ 00:09:19.618 00:03:34 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:19.618 00:03:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:19.618 00:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.618 00:03:34 -- common/autotest_common.sh@10 -- # set +x 00:09:19.618 ************************************ 00:09:19.618 START TEST nvme_overhead 00:09:19.618 ************************************ 00:09:19.618 00:03:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:20.997 Initializing NVMe Controllers 00:09:20.997 Attached to 0000:00:09.0 00:09:20.997 Attached to 0000:00:06.0 00:09:20.997 Attached to 0000:00:07.0 00:09:20.997 Attached to 0000:00:08.0 00:09:20.997 Initialization complete. Launching workers. 
00:09:20.997 submit (in ns) avg, min, max = 12619.4, 10486.9, 262825.4 00:09:20.997 complete (in ns) avg, min, max = 8690.1, 7143.8, 348660.0 00:09:20.997 00:09:20.997 Submit histogram 00:09:20.997 ================ 00:09:20.997 Range in us Cumulative Count 00:09:20.997 10.486 - 10.535: 0.0060% ( 1) 00:09:20.997 10.535 - 10.585: 0.0119% ( 1) 00:09:20.997 10.585 - 10.634: 0.3397% ( 55) 00:09:20.997 10.634 - 10.683: 2.1454% ( 303) 00:09:20.997 10.683 - 10.732: 6.7104% ( 766) 00:09:20.997 10.732 - 10.782: 14.2491% ( 1265) 00:09:20.997 10.782 - 10.831: 25.2741% ( 1850) 00:09:20.997 10.831 - 10.880: 36.3945% ( 1866) 00:09:20.997 10.880 - 10.929: 46.1144% ( 1631) 00:09:20.997 10.929 - 10.978: 53.4505% ( 1231) 00:09:20.997 10.978 - 11.028: 58.0274% ( 768) 00:09:20.997 11.028 - 11.077: 61.1502% ( 524) 00:09:20.997 11.077 - 11.126: 63.1883% ( 342) 00:09:20.997 11.126 - 11.175: 64.5352% ( 226) 00:09:20.997 11.175 - 11.225: 65.3933% ( 144) 00:09:20.997 11.225 - 11.274: 66.0906% ( 117) 00:09:20.997 11.274 - 11.323: 66.7402% ( 109) 00:09:20.997 11.323 - 11.372: 67.3778% ( 107) 00:09:20.997 11.372 - 11.422: 67.9321% ( 93) 00:09:20.997 11.422 - 11.471: 68.4625% ( 89) 00:09:20.997 11.471 - 11.520: 69.0405% ( 97) 00:09:20.997 11.520 - 11.569: 69.6722% ( 106) 00:09:20.997 11.569 - 11.618: 70.3218% ( 109) 00:09:20.997 11.618 - 11.668: 70.8105% ( 82) 00:09:20.997 11.668 - 11.717: 71.3111% ( 84) 00:09:20.997 11.717 - 11.766: 71.7521% ( 74) 00:09:20.997 11.766 - 11.815: 72.2110% ( 77) 00:09:20.997 11.815 - 11.865: 72.5209% ( 52) 00:09:20.997 11.865 - 11.914: 72.8010% ( 47) 00:09:20.997 11.914 - 11.963: 73.0215% ( 37) 00:09:20.997 11.963 - 12.012: 73.1883% ( 28) 00:09:20.997 12.012 - 12.062: 73.3611% ( 29) 00:09:20.997 12.062 - 12.111: 73.4744% ( 19) 00:09:20.997 12.111 - 12.160: 73.5340% ( 10) 00:09:20.997 12.160 - 12.209: 73.6293% ( 16) 00:09:20.997 12.209 - 12.258: 73.6889% ( 10) 00:09:20.997 12.258 - 12.308: 73.7187% ( 5) 00:09:20.997 12.308 - 12.357: 73.7604% ( 7) 00:09:20.997 12.357 - 12.406: 73.8141% ( 9) 00:09:20.997 12.406 - 12.455: 73.8200% ( 1) 00:09:20.997 12.455 - 12.505: 73.8319% ( 2) 00:09:20.997 12.505 - 12.554: 73.8737% ( 7) 00:09:20.997 12.554 - 12.603: 73.8975% ( 4) 00:09:20.997 12.603 - 12.702: 73.9928% ( 16) 00:09:20.997 12.702 - 12.800: 74.0346% ( 7) 00:09:20.997 12.800 - 12.898: 74.1418% ( 18) 00:09:20.997 12.898 - 12.997: 74.2849% ( 24) 00:09:20.997 12.997 - 13.095: 74.4338% ( 25) 00:09:20.997 13.095 - 13.194: 74.5471% ( 19) 00:09:20.997 13.194 - 13.292: 74.6424% ( 16) 00:09:20.997 13.292 - 13.391: 74.7676% ( 21) 00:09:20.997 13.391 - 13.489: 74.8629% ( 16) 00:09:20.997 13.489 - 13.588: 74.9464% ( 14) 00:09:20.997 13.588 - 13.686: 75.0179% ( 12) 00:09:20.997 13.686 - 13.785: 75.1311% ( 19) 00:09:20.997 13.785 - 13.883: 75.2682% ( 23) 00:09:20.997 13.883 - 13.982: 75.3814% ( 19) 00:09:20.997 13.982 - 14.080: 75.5066% ( 21) 00:09:20.997 14.080 - 14.178: 75.6317% ( 21) 00:09:20.997 14.178 - 14.277: 75.7390% ( 18) 00:09:20.997 14.277 - 14.375: 75.8522% ( 19) 00:09:20.997 14.375 - 14.474: 75.9714% ( 20) 00:09:20.997 14.474 - 14.572: 76.0787% ( 18) 00:09:20.997 14.572 - 14.671: 76.1979% ( 20) 00:09:20.997 14.671 - 14.769: 76.3647% ( 28) 00:09:20.997 14.769 - 14.868: 76.5018% ( 23) 00:09:20.997 14.868 - 14.966: 76.7521% ( 42) 00:09:20.997 14.966 - 15.065: 77.0203% ( 45) 00:09:20.997 15.065 - 15.163: 77.4076% ( 65) 00:09:20.998 15.163 - 15.262: 77.9857% ( 97) 00:09:20.998 15.262 - 15.360: 78.5399% ( 93) 00:09:20.998 15.360 - 15.458: 79.2133% ( 113) 00:09:20.998 15.458 - 15.557: 79.8033% ( 99) 
00:09:20.998 15.557 - 15.655: 80.3874% ( 98) 00:09:20.998 15.655 - 15.754: 80.9535% ( 95) 00:09:20.998 15.754 - 15.852: 81.5256% ( 96) 00:09:20.998 15.852 - 15.951: 82.1752% ( 109) 00:09:20.998 15.951 - 16.049: 83.0691% ( 150) 00:09:20.998 16.049 - 16.148: 84.4636% ( 234) 00:09:20.998 16.148 - 16.246: 86.1740% ( 287) 00:09:20.998 16.246 - 16.345: 88.1168% ( 326) 00:09:20.998 16.345 - 16.443: 89.8749% ( 295) 00:09:20.998 16.443 - 16.542: 91.2932% ( 238) 00:09:20.998 16.542 - 16.640: 92.2050% ( 153) 00:09:20.998 16.640 - 16.738: 92.7116% ( 85) 00:09:20.998 16.738 - 16.837: 93.2002% ( 82) 00:09:20.998 16.837 - 16.935: 93.7008% ( 84) 00:09:20.998 16.935 - 17.034: 94.1538% ( 76) 00:09:20.998 17.034 - 17.132: 94.6722% ( 87) 00:09:20.998 17.132 - 17.231: 95.1490% ( 80) 00:09:20.998 17.231 - 17.329: 95.5662% ( 70) 00:09:20.998 17.329 - 17.428: 95.8999% ( 56) 00:09:20.998 17.428 - 17.526: 96.1561% ( 43) 00:09:20.998 17.526 - 17.625: 96.4779% ( 54) 00:09:20.998 17.625 - 17.723: 96.6865% ( 35) 00:09:20.998 17.723 - 17.822: 96.8057% ( 20) 00:09:20.998 17.822 - 17.920: 96.9487% ( 24) 00:09:20.998 17.920 - 18.018: 97.0620% ( 19) 00:09:20.998 18.018 - 18.117: 97.1454% ( 14) 00:09:20.998 18.117 - 18.215: 97.2110% ( 11) 00:09:20.998 18.215 - 18.314: 97.2825% ( 12) 00:09:20.998 18.314 - 18.412: 97.3421% ( 10) 00:09:20.998 18.412 - 18.511: 97.3838% ( 7) 00:09:20.998 18.511 - 18.609: 97.3957% ( 2) 00:09:20.998 18.609 - 18.708: 97.4374% ( 7) 00:09:20.998 18.708 - 18.806: 97.4613% ( 4) 00:09:20.998 18.806 - 18.905: 97.4911% ( 5) 00:09:20.998 18.905 - 19.003: 97.5030% ( 2) 00:09:20.998 19.003 - 19.102: 97.5209% ( 3) 00:09:20.998 19.102 - 19.200: 97.5387% ( 3) 00:09:20.998 19.200 - 19.298: 97.5447% ( 1) 00:09:20.998 19.298 - 19.397: 97.5626% ( 3) 00:09:20.998 19.397 - 19.495: 97.5685% ( 1) 00:09:20.998 19.495 - 19.594: 97.5864% ( 3) 00:09:20.998 19.594 - 19.692: 97.5983% ( 2) 00:09:20.998 19.692 - 19.791: 97.6400% ( 7) 00:09:20.998 19.791 - 19.889: 97.6460% ( 1) 00:09:20.998 19.889 - 19.988: 97.6520% ( 1) 00:09:20.998 20.086 - 20.185: 97.6758% ( 4) 00:09:20.998 20.185 - 20.283: 97.6818% ( 1) 00:09:20.998 20.283 - 20.382: 97.6937% ( 2) 00:09:20.998 20.382 - 20.480: 97.7116% ( 3) 00:09:20.998 20.480 - 20.578: 97.7294% ( 3) 00:09:20.998 20.677 - 20.775: 97.7473% ( 3) 00:09:20.998 20.775 - 20.874: 97.7592% ( 2) 00:09:20.998 20.874 - 20.972: 97.7771% ( 3) 00:09:20.998 20.972 - 21.071: 97.8069% ( 5) 00:09:20.998 21.071 - 21.169: 97.8308% ( 4) 00:09:20.998 21.169 - 21.268: 97.8665% ( 6) 00:09:20.998 21.268 - 21.366: 97.9082% ( 7) 00:09:20.998 21.366 - 21.465: 97.9619% ( 9) 00:09:20.998 21.465 - 21.563: 97.9917% ( 5) 00:09:20.998 21.563 - 21.662: 98.0453% ( 9) 00:09:20.998 21.662 - 21.760: 98.0751% ( 5) 00:09:20.998 21.760 - 21.858: 98.0870% ( 2) 00:09:20.998 21.858 - 21.957: 98.1049% ( 3) 00:09:20.998 21.957 - 22.055: 98.1108% ( 1) 00:09:20.998 22.055 - 22.154: 98.1287% ( 3) 00:09:20.998 22.154 - 22.252: 98.1347% ( 1) 00:09:20.998 22.252 - 22.351: 98.1466% ( 2) 00:09:20.998 22.351 - 22.449: 98.1526% ( 1) 00:09:20.998 22.449 - 22.548: 98.1645% ( 2) 00:09:20.998 22.548 - 22.646: 98.1704% ( 1) 00:09:20.998 22.646 - 22.745: 98.1824% ( 2) 00:09:20.998 22.843 - 22.942: 98.2002% ( 3) 00:09:20.998 23.138 - 23.237: 98.2062% ( 1) 00:09:20.998 23.237 - 23.335: 98.2122% ( 1) 00:09:20.998 23.335 - 23.434: 98.2181% ( 1) 00:09:20.998 23.532 - 23.631: 98.2479% ( 5) 00:09:20.998 23.631 - 23.729: 98.2539% ( 1) 00:09:20.998 23.729 - 23.828: 98.2598% ( 1) 00:09:20.998 23.828 - 23.926: 98.2658% ( 1) 00:09:20.998 23.926 - 24.025: 98.2777% ( 
2) 00:09:20.998 24.025 - 24.123: 98.2956% ( 3) 00:09:20.998 24.123 - 24.222: 98.3015% ( 1) 00:09:20.998 24.517 - 24.615: 98.3135% ( 2) 00:09:20.998 24.615 - 24.714: 98.3254% ( 2) 00:09:20.998 24.911 - 25.009: 98.3313% ( 1) 00:09:20.998 25.108 - 25.206: 98.3433% ( 2) 00:09:20.998 25.206 - 25.403: 98.3492% ( 1) 00:09:20.998 25.403 - 25.600: 98.3611% ( 2) 00:09:20.998 25.797 - 25.994: 98.3790% ( 3) 00:09:20.998 25.994 - 26.191: 98.3850% ( 1) 00:09:20.998 26.191 - 26.388: 98.3909% ( 1) 00:09:20.998 26.388 - 26.585: 98.4148% ( 4) 00:09:20.998 26.585 - 26.782: 98.4386% ( 4) 00:09:20.998 26.782 - 26.978: 98.4803% ( 7) 00:09:20.998 26.978 - 27.175: 98.5161% ( 6) 00:09:20.998 27.175 - 27.372: 98.5936% ( 13) 00:09:20.998 27.372 - 27.569: 98.7128% ( 20) 00:09:20.998 27.569 - 27.766: 98.7485% ( 6) 00:09:20.998 27.766 - 27.963: 98.8021% ( 9) 00:09:20.998 27.963 - 28.160: 98.8319% ( 5) 00:09:20.998 28.160 - 28.357: 98.8558% ( 4) 00:09:20.998 28.357 - 28.554: 98.8677% ( 2) 00:09:20.998 28.948 - 29.145: 98.8796% ( 2) 00:09:20.998 29.145 - 29.342: 98.8856% ( 1) 00:09:20.998 30.129 - 30.326: 98.8915% ( 1) 00:09:20.998 30.326 - 30.523: 98.9154% ( 4) 00:09:20.998 30.523 - 30.720: 99.0584% ( 24) 00:09:20.998 30.720 - 30.917: 99.3325% ( 46) 00:09:20.998 30.917 - 31.114: 99.5292% ( 33) 00:09:20.998 31.114 - 31.311: 99.6305% ( 17) 00:09:20.998 31.311 - 31.508: 99.7139% ( 14) 00:09:20.998 31.508 - 31.705: 99.7557% ( 7) 00:09:20.998 31.705 - 31.902: 99.7735% ( 3) 00:09:20.998 31.902 - 32.098: 99.8033% ( 5) 00:09:20.998 32.098 - 32.295: 99.8153% ( 2) 00:09:20.998 32.295 - 32.492: 99.8331% ( 3) 00:09:20.998 32.689 - 32.886: 99.8391% ( 1) 00:09:20.998 33.477 - 33.674: 99.8451% ( 1) 00:09:20.998 35.052 - 35.249: 99.8510% ( 1) 00:09:20.998 35.840 - 36.037: 99.8570% ( 1) 00:09:20.998 36.628 - 36.825: 99.8629% ( 1) 00:09:20.998 37.218 - 37.415: 99.8689% ( 1) 00:09:20.998 37.415 - 37.612: 99.8749% ( 1) 00:09:20.998 37.612 - 37.809: 99.8808% ( 1) 00:09:20.998 38.597 - 38.794: 99.8868% ( 1) 00:09:20.998 41.551 - 41.748: 99.8927% ( 1) 00:09:20.998 42.142 - 42.338: 99.8987% ( 1) 00:09:20.998 42.535 - 42.732: 99.9046% ( 1) 00:09:20.998 43.520 - 43.717: 99.9106% ( 1) 00:09:20.998 44.111 - 44.308: 99.9166% ( 1) 00:09:20.998 44.308 - 44.505: 99.9225% ( 1) 00:09:20.998 47.262 - 47.458: 99.9285% ( 1) 00:09:20.998 48.246 - 48.443: 99.9344% ( 1) 00:09:20.998 48.837 - 49.034: 99.9404% ( 1) 00:09:20.998 50.215 - 50.412: 99.9464% ( 1) 00:09:20.998 56.320 - 56.714: 99.9523% ( 1) 00:09:20.998 57.502 - 57.895: 99.9583% ( 1) 00:09:20.998 69.711 - 70.105: 99.9642% ( 1) 00:09:20.998 70.892 - 71.286: 99.9702% ( 1) 00:09:20.998 96.886 - 97.280: 99.9762% ( 1) 00:09:20.998 104.763 - 105.551: 99.9821% ( 1) 00:09:20.998 168.566 - 169.354: 99.9881% ( 1) 00:09:20.998 200.862 - 201.649: 99.9940% ( 1) 00:09:20.998 261.514 - 263.089: 100.0000% ( 1) 00:09:20.998 00:09:20.998 Complete histogram 00:09:20.998 ================== 00:09:20.998 Range in us Cumulative Count 00:09:20.998 7.138 - 7.188: 0.1728% ( 29) 00:09:20.998 7.188 - 7.237: 2.0024% ( 307) 00:09:20.998 7.237 - 7.286: 8.0095% ( 1008) 00:09:20.998 7.286 - 7.335: 20.7092% ( 2131) 00:09:20.998 7.335 - 7.385: 37.8367% ( 2874) 00:09:20.998 7.385 - 7.434: 53.6591% ( 2655) 00:09:20.998 7.434 - 7.483: 63.2658% ( 1612) 00:09:20.998 7.483 - 7.532: 68.2479% ( 836) 00:09:20.998 7.532 - 7.582: 70.5781% ( 391) 00:09:20.998 7.582 - 7.631: 71.8296% ( 210) 00:09:20.998 7.631 - 7.680: 72.4493% ( 104) 00:09:20.998 7.680 - 7.729: 72.7771% ( 55) 00:09:20.998 7.729 - 7.778: 72.9440% ( 28) 00:09:20.998 7.778 - 7.828: 
73.0334% ( 15) 00:09:20.998 7.828 - 7.877: 73.1287% ( 16) 00:09:20.998 7.877 - 7.926: 73.1883% ( 10) 00:09:20.998 7.926 - 7.975: 73.2360% ( 8) 00:09:20.998 7.975 - 8.025: 73.2896% ( 9) 00:09:20.998 8.025 - 8.074: 73.3492% ( 10) 00:09:20.998 8.074 - 8.123: 73.4744% ( 21) 00:09:20.998 8.123 - 8.172: 73.6532% ( 30) 00:09:20.998 8.172 - 8.222: 73.8796% ( 38) 00:09:20.998 8.222 - 8.271: 74.0942% ( 36) 00:09:20.998 8.271 - 8.320: 74.2312% ( 23) 00:09:20.998 8.320 - 8.369: 74.3325% ( 17) 00:09:20.998 8.369 - 8.418: 74.4398% ( 18) 00:09:20.998 8.418 - 8.468: 74.5173% ( 13) 00:09:20.998 8.468 - 8.517: 74.6186% ( 17) 00:09:20.998 8.517 - 8.566: 74.6663% ( 8) 00:09:20.998 8.566 - 8.615: 74.7139% ( 8) 00:09:20.998 8.615 - 8.665: 74.7735% ( 10) 00:09:20.998 8.665 - 8.714: 74.8331% ( 10) 00:09:20.998 8.714 - 8.763: 74.8689% ( 6) 00:09:20.998 8.763 - 8.812: 74.9046% ( 6) 00:09:20.998 8.812 - 8.862: 74.9106% ( 1) 00:09:20.999 8.862 - 8.911: 74.9166% ( 1) 00:09:20.999 8.911 - 8.960: 74.9523% ( 6) 00:09:20.999 8.960 - 9.009: 74.9702% ( 3) 00:09:20.999 9.009 - 9.058: 74.9940% ( 4) 00:09:20.999 9.058 - 9.108: 75.0119% ( 3) 00:09:20.999 9.108 - 9.157: 75.0358% ( 4) 00:09:20.999 9.157 - 9.206: 75.0596% ( 4) 00:09:20.999 9.206 - 9.255: 75.0775% ( 3) 00:09:20.999 9.255 - 9.305: 75.0894% ( 2) 00:09:20.999 9.305 - 9.354: 75.0954% ( 1) 00:09:20.999 9.354 - 9.403: 75.1192% ( 4) 00:09:20.999 9.403 - 9.452: 75.1669% ( 8) 00:09:20.999 9.452 - 9.502: 75.2086% ( 7) 00:09:20.999 9.502 - 9.551: 75.2861% ( 13) 00:09:20.999 9.551 - 9.600: 75.3159% ( 5) 00:09:20.999 9.600 - 9.649: 75.3993% ( 14) 00:09:20.999 9.649 - 9.698: 75.4648% ( 11) 00:09:20.999 9.698 - 9.748: 75.5602% ( 16) 00:09:20.999 9.748 - 9.797: 75.6436% ( 14) 00:09:20.999 9.797 - 9.846: 75.7211% ( 13) 00:09:20.999 9.846 - 9.895: 75.8045% ( 14) 00:09:20.999 9.895 - 9.945: 75.9058% ( 17) 00:09:20.999 9.945 - 9.994: 75.9833% ( 13) 00:09:20.999 9.994 - 10.043: 76.0846% ( 17) 00:09:20.999 10.043 - 10.092: 76.1561% ( 12) 00:09:20.999 10.092 - 10.142: 76.2217% ( 11) 00:09:20.999 10.142 - 10.191: 76.2992% ( 13) 00:09:20.999 10.191 - 10.240: 76.3886% ( 15) 00:09:20.999 10.240 - 10.289: 76.4601% ( 12) 00:09:20.999 10.289 - 10.338: 76.4958% ( 6) 00:09:20.999 10.338 - 10.388: 76.5673% ( 12) 00:09:20.999 10.388 - 10.437: 76.6508% ( 14) 00:09:20.999 10.437 - 10.486: 76.7759% ( 21) 00:09:20.999 10.486 - 10.535: 76.9547% ( 30) 00:09:20.999 10.535 - 10.585: 77.1514% ( 33) 00:09:20.999 10.585 - 10.634: 77.4017% ( 42) 00:09:20.999 10.634 - 10.683: 77.7414% ( 57) 00:09:20.999 10.683 - 10.732: 78.0632% ( 54) 00:09:20.999 10.732 - 10.782: 78.4327% ( 62) 00:09:20.999 10.782 - 10.831: 78.8617% ( 72) 00:09:20.999 10.831 - 10.880: 79.3266% ( 78) 00:09:20.999 10.880 - 10.929: 79.7497% ( 71) 00:09:20.999 10.929 - 10.978: 80.0834% ( 56) 00:09:20.999 10.978 - 11.028: 80.3814% ( 50) 00:09:20.999 11.028 - 11.077: 80.6555% ( 46) 00:09:20.999 11.077 - 11.126: 80.9178% ( 44) 00:09:20.999 11.126 - 11.175: 81.3766% ( 77) 00:09:20.999 11.175 - 11.225: 81.9309% ( 93) 00:09:20.999 11.225 - 11.274: 82.6341% ( 118) 00:09:20.999 11.274 - 11.323: 83.4267% ( 133) 00:09:20.999 11.323 - 11.372: 84.4517% ( 172) 00:09:20.999 11.372 - 11.422: 85.7151% ( 212) 00:09:20.999 11.422 - 11.471: 87.0560% ( 225) 00:09:20.999 11.471 - 11.520: 88.2777% ( 205) 00:09:20.999 11.520 - 11.569: 89.3564% ( 181) 00:09:20.999 11.569 - 11.618: 90.3456% ( 166) 00:09:20.999 11.618 - 11.668: 91.2574% ( 153) 00:09:20.999 11.668 - 11.717: 92.0918% ( 140) 00:09:20.999 11.717 - 11.766: 92.6937% ( 101) 00:09:20.999 11.766 - 11.815: 93.2360% 
( 91) 00:09:20.999 11.815 - 11.865: 93.6353% ( 67) 00:09:20.999 11.865 - 11.914: 93.9928% ( 60) 00:09:20.999 11.914 - 11.963: 94.3445% ( 59) 00:09:20.999 11.963 - 12.012: 94.5948% ( 42) 00:09:20.999 12.012 - 12.062: 94.8272% ( 39) 00:09:20.999 12.062 - 12.111: 95.0894% ( 44) 00:09:20.999 12.111 - 12.160: 95.2741% ( 31) 00:09:20.999 12.160 - 12.209: 95.4470% ( 29) 00:09:20.999 12.209 - 12.258: 95.5840% ( 23) 00:09:20.999 12.258 - 12.308: 95.7449% ( 27) 00:09:20.999 12.308 - 12.357: 95.8462% ( 17) 00:09:20.999 12.357 - 12.406: 95.9416% ( 16) 00:09:20.999 12.406 - 12.455: 96.0250% ( 14) 00:09:20.999 12.455 - 12.505: 96.0548% ( 5) 00:09:20.999 12.505 - 12.554: 96.1263% ( 12) 00:09:20.999 12.554 - 12.603: 96.1800% ( 9) 00:09:20.999 12.603 - 12.702: 96.2813% ( 17) 00:09:20.999 12.702 - 12.800: 96.4124% ( 22) 00:09:20.999 12.800 - 12.898: 96.5435% ( 22) 00:09:20.999 12.898 - 12.997: 96.6210% ( 13) 00:09:20.999 12.997 - 13.095: 96.7044% ( 14) 00:09:20.999 13.095 - 13.194: 96.8415% ( 23) 00:09:20.999 13.194 - 13.292: 96.9607% ( 20) 00:09:20.999 13.292 - 13.391: 97.0441% ( 14) 00:09:20.999 13.391 - 13.489: 97.1692% ( 21) 00:09:20.999 13.489 - 13.588: 97.2646% ( 16) 00:09:20.999 13.588 - 13.686: 97.3302% ( 11) 00:09:20.999 13.686 - 13.785: 97.4195% ( 15) 00:09:20.999 13.785 - 13.883: 97.5030% ( 14) 00:09:20.999 13.883 - 13.982: 97.5507% ( 8) 00:09:20.999 13.982 - 14.080: 97.6222% ( 12) 00:09:20.999 14.080 - 14.178: 97.6639% ( 7) 00:09:20.999 14.178 - 14.277: 97.7116% ( 8) 00:09:20.999 14.277 - 14.375: 97.7354% ( 4) 00:09:20.999 14.375 - 14.474: 97.7592% ( 4) 00:09:20.999 14.474 - 14.572: 97.8308% ( 12) 00:09:20.999 14.572 - 14.671: 97.8367% ( 1) 00:09:20.999 14.671 - 14.769: 97.8427% ( 1) 00:09:20.999 14.769 - 14.868: 97.8665% ( 4) 00:09:20.999 14.868 - 14.966: 97.8844% ( 3) 00:09:20.999 14.966 - 15.065: 97.9082% ( 4) 00:09:20.999 15.065 - 15.163: 97.9321% ( 4) 00:09:20.999 15.163 - 15.262: 97.9380% ( 1) 00:09:20.999 15.262 - 15.360: 97.9499% ( 2) 00:09:20.999 15.360 - 15.458: 97.9559% ( 1) 00:09:20.999 15.557 - 15.655: 97.9619% ( 1) 00:09:20.999 15.655 - 15.754: 97.9678% ( 1) 00:09:20.999 15.754 - 15.852: 97.9738% ( 1) 00:09:20.999 15.852 - 15.951: 97.9797% ( 1) 00:09:20.999 16.049 - 16.148: 97.9857% ( 1) 00:09:20.999 16.148 - 16.246: 97.9917% ( 1) 00:09:20.999 16.246 - 16.345: 98.0036% ( 2) 00:09:20.999 16.542 - 16.640: 98.0215% ( 3) 00:09:20.999 16.640 - 16.738: 98.0334% ( 2) 00:09:20.999 17.231 - 17.329: 98.0393% ( 1) 00:09:20.999 17.329 - 17.428: 98.0453% ( 1) 00:09:20.999 17.428 - 17.526: 98.0513% ( 1) 00:09:20.999 17.526 - 17.625: 98.0572% ( 1) 00:09:20.999 17.723 - 17.822: 98.0691% ( 2) 00:09:20.999 17.822 - 17.920: 98.0751% ( 1) 00:09:20.999 18.117 - 18.215: 98.0930% ( 3) 00:09:20.999 18.215 - 18.314: 98.0989% ( 1) 00:09:20.999 18.708 - 18.806: 98.1168% ( 3) 00:09:20.999 18.905 - 19.003: 98.1287% ( 2) 00:09:20.999 19.003 - 19.102: 98.1347% ( 1) 00:09:20.999 19.102 - 19.200: 98.1466% ( 2) 00:09:20.999 19.298 - 19.397: 98.1704% ( 4) 00:09:20.999 19.397 - 19.495: 98.2062% ( 6) 00:09:20.999 19.495 - 19.594: 98.2658% ( 10) 00:09:20.999 19.594 - 19.692: 98.3194% ( 9) 00:09:20.999 19.692 - 19.791: 98.3492% ( 5) 00:09:20.999 19.791 - 19.889: 98.3671% ( 3) 00:09:20.999 19.889 - 19.988: 98.3731% ( 1) 00:09:20.999 20.086 - 20.185: 98.3790% ( 1) 00:09:20.999 20.185 - 20.283: 98.3850% ( 1) 00:09:20.999 20.382 - 20.480: 98.3909% ( 1) 00:09:20.999 20.480 - 20.578: 98.4088% ( 3) 00:09:20.999 20.874 - 20.972: 98.4267% ( 3) 00:09:20.999 20.972 - 21.071: 98.4327% ( 1) 00:09:20.999 21.071 - 21.169: 98.4386% ( 
1) 00:09:20.999 21.169 - 21.268: 98.4505% ( 2) 00:09:20.999 21.366 - 21.465: 98.4565% ( 1) 00:09:20.999 21.465 - 21.563: 98.4625% ( 1) 00:09:20.999 21.563 - 21.662: 98.4744% ( 2) 00:09:20.999 21.662 - 21.760: 98.5042% ( 5) 00:09:20.999 21.760 - 21.858: 98.5221% ( 3) 00:09:20.999 21.858 - 21.957: 98.5638% ( 7) 00:09:20.999 21.957 - 22.055: 98.6532% ( 15) 00:09:20.999 22.055 - 22.154: 98.8379% ( 31) 00:09:20.999 22.154 - 22.252: 99.1478% ( 52) 00:09:20.999 22.252 - 22.351: 99.3385% ( 32) 00:09:20.999 22.351 - 22.449: 99.4517% ( 19) 00:09:20.999 22.449 - 22.548: 99.5232% ( 12) 00:09:20.999 22.548 - 22.646: 99.5769% ( 9) 00:09:20.999 22.646 - 22.745: 99.6067% ( 5) 00:09:20.999 22.745 - 22.843: 99.6484% ( 7) 00:09:20.999 22.843 - 22.942: 99.6544% ( 1) 00:09:20.999 22.942 - 23.040: 99.7080% ( 9) 00:09:20.999 23.040 - 23.138: 99.7139% ( 1) 00:09:20.999 23.138 - 23.237: 99.7378% ( 4) 00:09:20.999 23.237 - 23.335: 99.7616% ( 4) 00:09:20.999 23.335 - 23.434: 99.7676% ( 1) 00:09:20.999 23.434 - 23.532: 99.7855% ( 3) 00:09:20.999 23.631 - 23.729: 99.7914% ( 1) 00:09:20.999 23.729 - 23.828: 99.7974% ( 1) 00:09:20.999 23.926 - 24.025: 99.8033% ( 1) 00:09:20.999 24.123 - 24.222: 99.8153% ( 2) 00:09:20.999 24.714 - 24.812: 99.8212% ( 1) 00:09:20.999 24.812 - 24.911: 99.8272% ( 1) 00:09:20.999 25.797 - 25.994: 99.8331% ( 1) 00:09:20.999 26.191 - 26.388: 99.8391% ( 1) 00:09:20.999 27.175 - 27.372: 99.8451% ( 1) 00:09:20.999 27.569 - 27.766: 99.8570% ( 2) 00:09:20.999 27.766 - 27.963: 99.8689% ( 2) 00:09:20.999 27.963 - 28.160: 99.8808% ( 2) 00:09:20.999 28.160 - 28.357: 99.8927% ( 2) 00:09:20.999 32.492 - 32.689: 99.8987% ( 1) 00:09:20.999 33.280 - 33.477: 99.9046% ( 1) 00:09:20.999 35.446 - 35.643: 99.9106% ( 1) 00:09:20.999 37.415 - 37.612: 99.9166% ( 1) 00:09:20.999 38.203 - 38.400: 99.9225% ( 1) 00:09:20.999 38.400 - 38.597: 99.9285% ( 1) 00:09:20.999 38.794 - 38.991: 99.9344% ( 1) 00:09:20.999 39.188 - 39.385: 99.9464% ( 2) 00:09:20.999 39.975 - 40.172: 99.9523% ( 1) 00:09:20.999 40.369 - 40.566: 99.9642% ( 2) 00:09:20.999 46.474 - 46.671: 99.9702% ( 1) 00:09:21.000 61.440 - 61.834: 99.9762% ( 1) 00:09:21.000 66.954 - 67.348: 99.9821% ( 1) 00:09:21.000 70.105 - 70.498: 99.9881% ( 1) 00:09:21.000 241.034 - 242.609: 99.9940% ( 1) 00:09:21.000 348.160 - 349.735: 100.0000% ( 1) 00:09:21.000 00:09:21.000 00:09:21.000 real 0m1.214s 00:09:21.000 user 0m1.063s 00:09:21.000 sys 0m0.087s 00:09:21.000 00:03:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.000 00:03:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 ************************************ 00:09:21.000 END TEST nvme_overhead 00:09:21.000 ************************************ 00:09:21.000 00:03:35 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:21.000 00:03:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:21.000 00:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.000 00:03:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 ************************************ 00:09:21.000 START TEST nvme_arbitration 00:09:21.000 ************************************ 00:09:21.000 00:03:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:24.302 Initializing NVMe Controllers 00:09:24.302 Attached to 0000:00:09.0 00:09:24.302 Attached to 0000:00:06.0 00:09:24.302 Attached to 0000:00:07.0 00:09:24.302 Attached to 0000:00:08.0 00:09:24.302 Associating QEMU NVMe Ctrl (12343 ) 
with lcore 0 00:09:24.302 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:24.302 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:24.302 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:24.302 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:24.302 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:24.302 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:24.302 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:24.302 Initialization complete. Launching workers. 00:09:24.302 Starting thread on core 1 with urgent priority queue 00:09:24.302 Starting thread on core 2 with urgent priority queue 00:09:24.302 Starting thread on core 3 with urgent priority queue 00:09:24.302 Starting thread on core 0 with urgent priority queue 00:09:24.302 QEMU NVMe Ctrl (12343 ) core 0: 6421.33 IO/s 15.57 secs/100000 ios 00:09:24.302 QEMU NVMe Ctrl (12342 ) core 0: 6421.33 IO/s 15.57 secs/100000 ios 00:09:24.302 QEMU NVMe Ctrl (12340 ) core 1: 6421.33 IO/s 15.57 secs/100000 ios 00:09:24.302 QEMU NVMe Ctrl (12342 ) core 1: 6421.33 IO/s 15.57 secs/100000 ios 00:09:24.302 QEMU NVMe Ctrl (12341 ) core 2: 6080.00 IO/s 16.45 secs/100000 ios 00:09:24.302 QEMU NVMe Ctrl (12342 ) core 3: 5952.00 IO/s 16.80 secs/100000 ios 00:09:24.302 ======================================================== 00:09:24.302 00:09:24.302 00:09:24.302 real 0m3.208s 00:09:24.302 user 0m9.020s 00:09:24.302 sys 0m0.099s 00:09:24.302 ************************************ 00:09:24.302 END TEST nvme_arbitration 00:09:24.302 00:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.302 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.302 ************************************ 00:09:24.302 00:03:38 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:24.302 00:03:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:24.302 00:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.302 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.302 ************************************ 00:09:24.302 START TEST nvme_single_aen 00:09:24.302 ************************************ 00:09:24.302 00:03:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:24.302 [2024-11-28 00:03:38.599112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:24.302 [2024-11-28 00:03:38.599188] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.302 [2024-11-28 00:03:38.732646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:24.302 [2024-11-28 00:03:38.735267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:24.302 [2024-11-28 00:03:38.737224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:24.302 [2024-11-28 00:03:38.739615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:24.302 Asynchronous Event Request test 00:09:24.302 Attached to 0000:00:09.0 00:09:24.302 Attached to 0000:00:06.0 00:09:24.302 Attached to 0000:00:07.0 00:09:24.302 Attached to 0000:00:08.0 00:09:24.302 Reset controller to setup AER completions for this process 00:09:24.302 Registering asynchronous event callbacks... 00:09:24.302 Getting orig temperature thresholds of all controllers 00:09:24.302 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.302 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.302 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.302 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.302 Setting all controllers temperature threshold low to trigger AER 00:09:24.302 Waiting for all controllers temperature threshold to be set lower 00:09:24.302 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.302 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:24.302 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.302 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:24.302 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.302 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:24.302 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.302 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:24.302 Waiting for all controllers to trigger AER and reset threshold 00:09:24.302 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.302 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.302 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.302 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.302 Cleaning up... 
00:09:24.302 00:09:24.302 real 0m0.214s 00:09:24.302 user 0m0.078s 00:09:24.302 sys 0m0.085s 00:09:24.302 00:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.302 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.302 ************************************ 00:09:24.302 END TEST nvme_single_aen 00:09:24.302 ************************************ 00:09:24.302 00:03:38 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:24.302 00:03:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.302 00:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.302 00:03:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.302 ************************************ 00:09:24.302 START TEST nvme_doorbell_aers 00:09:24.302 ************************************ 00:09:24.302 00:03:38 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:24.302 00:03:38 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:24.302 00:03:38 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:24.302 00:03:38 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:24.302 00:03:38 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:24.302 00:03:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:24.302 00:03:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:24.302 00:03:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:24.302 00:03:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:24.302 00:03:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:24.564 00:03:38 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:24.564 00:03:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:24.564 00:03:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:24.564 00:03:38 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:24.564 [2024-11-28 00:03:39.114215] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:09:34.571 Executing: test_write_invalid_db 00:09:34.571 Waiting for AER completion... 00:09:34.571 Failure: test_write_invalid_db 00:09:34.571 00:09:34.571 Executing: test_invalid_db_write_overflow_sq 00:09:34.571 Waiting for AER completion... 00:09:34.571 Failure: test_invalid_db_write_overflow_sq 00:09:34.571 00:09:34.571 Executing: test_invalid_db_write_overflow_cq 00:09:34.571 Waiting for AER completion... 00:09:34.571 Failure: test_invalid_db_write_overflow_cq 00:09:34.571 00:09:34.571 00:03:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:34.571 00:03:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:34.571 [2024-11-28 00:03:49.116044] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:09:44.544 Executing: test_write_invalid_db 00:09:44.545 Waiting for AER completion... 00:09:44.545 Failure: test_write_invalid_db 00:09:44.545 00:09:44.545 Executing: test_invalid_db_write_overflow_sq 00:09:44.545 Waiting for AER completion... 
00:09:44.545 Failure: test_invalid_db_write_overflow_sq 00:09:44.545 00:09:44.545 Executing: test_invalid_db_write_overflow_cq 00:09:44.545 Waiting for AER completion... 00:09:44.545 Failure: test_invalid_db_write_overflow_cq 00:09:44.545 00:09:44.545 00:03:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:44.545 00:03:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:44.545 [2024-11-28 00:03:59.142706] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:09:54.586 Executing: test_write_invalid_db 00:09:54.586 Waiting for AER completion... 00:09:54.586 Failure: test_write_invalid_db 00:09:54.586 00:09:54.586 Executing: test_invalid_db_write_overflow_sq 00:09:54.586 Waiting for AER completion... 00:09:54.586 Failure: test_invalid_db_write_overflow_sq 00:09:54.586 00:09:54.586 Executing: test_invalid_db_write_overflow_cq 00:09:54.586 Waiting for AER completion... 00:09:54.587 Failure: test_invalid_db_write_overflow_cq 00:09:54.587 00:09:54.587 00:04:08 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:54.587 00:04:08 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:54.587 [2024-11-28 00:04:09.181818] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.555 Executing: test_write_invalid_db 00:10:04.555 Waiting for AER completion... 00:10:04.555 Failure: test_write_invalid_db 00:10:04.555 00:10:04.555 Executing: test_invalid_db_write_overflow_sq 00:10:04.555 Waiting for AER completion... 00:10:04.555 Failure: test_invalid_db_write_overflow_sq 00:10:04.555 00:10:04.555 Executing: test_invalid_db_write_overflow_cq 00:10:04.555 Waiting for AER completion... 00:10:04.555 Failure: test_invalid_db_write_overflow_cq 00:10:04.555 00:10:04.555 00:10:04.555 real 0m40.180s 00:10:04.555 user 0m34.099s 00:10:04.555 sys 0m5.682s 00:10:04.555 00:04:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:04.555 ************************************ 00:10:04.555 END TEST nvme_doorbell_aers 00:10:04.555 ************************************ 00:10:04.555 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 00:04:19 -- nvme/nvme.sh@97 -- # uname 00:10:04.555 00:04:19 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:04.555 00:04:19 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:04.555 00:04:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:04.555 00:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.555 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 ************************************ 00:10:04.555 START TEST nvme_multi_aen 00:10:04.555 ************************************ 00:10:04.555 00:04:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:04.555 [2024-11-28 00:04:19.097789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:04.555 [2024-11-28 00:04:19.097857] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.813 [2024-11-28 00:04:19.229810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:04.813 [2024-11-28 00:04:19.229854] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.229882] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.229890] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.231163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:04.813 [2024-11-28 00:04:19.231184] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.231201] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.231209] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.232186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:04.813 [2024-11-28 00:04:19.232203] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.232219] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.232226] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.233146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:04.813 [2024-11-28 00:04:19.233162] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.233177] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.233184] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75559) is not found. Dropping the request. 00:10:04.813 [2024-11-28 00:04:19.242499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:04.813 Child process pid: 76079 00:10:04.813 [2024-11-28 00:04:19.242578] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.072 [Child] Asynchronous Event Request test 00:10:05.072 [Child] Attached to 0000:00:09.0 00:10:05.072 [Child] Attached to 0000:00:06.0 00:10:05.072 [Child] Attached to 0000:00:07.0 00:10:05.072 [Child] Attached to 0000:00:08.0 00:10:05.072 [Child] Registering asynchronous event callbacks... 00:10:05.072 [Child] Getting orig temperature thresholds of all controllers 00:10:05.072 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:05.072 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 [Child] Cleaning up... 00:10:05.072 Asynchronous Event Request test 00:10:05.072 Attached to 0000:00:09.0 00:10:05.072 Attached to 0000:00:06.0 00:10:05.072 Attached to 0000:00:07.0 00:10:05.072 Attached to 0000:00:08.0 00:10:05.072 Reset controller to setup AER completions for this process 00:10:05.072 Registering asynchronous event callbacks... 
00:10:05.072 Getting orig temperature thresholds of all controllers 00:10:05.072 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:05.072 Setting all controllers temperature threshold low to trigger AER 00:10:05.072 Waiting for all controllers temperature threshold to be set lower 00:10:05.072 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:05.072 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:05.072 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:05.072 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:05.072 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:05.072 Waiting for all controllers to trigger AER and reset threshold 00:10:05.072 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.072 Cleaning up... 00:10:05.072 00:10:05.072 real 0m0.424s 00:10:05.072 user 0m0.111s 00:10:05.072 sys 0m0.172s 00:10:05.072 00:04:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.072 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:05.072 ************************************ 00:10:05.072 END TEST nvme_multi_aen 00:10:05.072 ************************************ 00:10:05.072 00:04:19 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:05.072 00:04:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:05.072 00:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.072 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:05.072 ************************************ 00:10:05.072 START TEST nvme_startup 00:10:05.072 ************************************ 00:10:05.072 00:04:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:05.332 Initializing NVMe Controllers 00:10:05.332 Attached to 0000:00:09.0 00:10:05.332 Attached to 0000:00:06.0 00:10:05.332 Attached to 0000:00:07.0 00:10:05.332 Attached to 0000:00:08.0 00:10:05.332 Initialization complete. 00:10:05.332 Time used:122795.789 (us). 
00:10:05.332 00:10:05.332 real 0m0.174s 00:10:05.332 user 0m0.048s 00:10:05.332 sys 0m0.083s 00:10:05.332 00:04:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.332 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:05.332 ************************************ 00:10:05.332 END TEST nvme_startup 00:10:05.332 ************************************ 00:10:05.332 00:04:19 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:05.332 00:04:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:05.332 00:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.332 00:04:19 -- common/autotest_common.sh@10 -- # set +x 00:10:05.332 ************************************ 00:10:05.332 START TEST nvme_multi_secondary 00:10:05.332 ************************************ 00:10:05.332 00:04:19 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:05.332 00:04:19 -- nvme/nvme.sh@52 -- # pid0=76135 00:10:05.332 00:04:19 -- nvme/nvme.sh@54 -- # pid1=76136 00:10:05.332 00:04:19 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:05.332 00:04:19 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:05.332 00:04:19 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:08.656 Initializing NVMe Controllers 00:10:08.656 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:08.656 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:08.656 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:08.656 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:08.656 Initialization complete. Launching workers. 
00:10:08.656 ======================================================== 00:10:08.656 Latency(us) 00:10:08.656 Device Information : IOPS MiB/s Average min max 00:10:08.656 PCIE (0000:00:09.0) NSID 1 from core 1: 6566.70 25.65 2436.08 792.97 6090.72 00:10:08.656 PCIE (0000:00:06.0) NSID 1 from core 1: 6566.70 25.65 2435.10 791.12 5908.07 00:10:08.656 PCIE (0000:00:07.0) NSID 1 from core 1: 6566.70 25.65 2436.17 805.04 5972.89 00:10:08.656 PCIE (0000:00:08.0) NSID 1 from core 1: 6566.70 25.65 2436.17 782.84 6844.33 00:10:08.656 PCIE (0000:00:08.0) NSID 2 from core 1: 6566.70 25.65 2436.31 774.66 7332.24 00:10:08.656 PCIE (0000:00:08.0) NSID 3 from core 1: 6566.70 25.65 2436.31 781.35 6294.95 00:10:08.656 ======================================================== 00:10:08.656 Total : 39400.18 153.91 2436.02 774.66 7332.24 00:10:08.656 00:10:08.656 Initializing NVMe Controllers 00:10:08.656 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:08.656 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:08.656 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:08.656 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:08.656 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:08.656 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:08.656 Initialization complete. Launching workers. 00:10:08.656 ======================================================== 00:10:08.656 Latency(us) 00:10:08.656 Device Information : IOPS MiB/s Average min max 00:10:08.656 PCIE (0000:00:09.0) NSID 1 from core 2: 2686.13 10.49 5955.95 1076.84 16138.15 00:10:08.656 PCIE (0000:00:06.0) NSID 1 from core 2: 2686.13 10.49 5955.15 964.13 17078.94 00:10:08.656 PCIE (0000:00:07.0) NSID 1 from core 2: 2686.13 10.49 5956.00 1223.17 17038.34 00:10:08.656 PCIE (0000:00:08.0) NSID 1 from core 2: 2686.13 10.49 5955.89 1203.17 15258.96 00:10:08.656 PCIE (0000:00:08.0) NSID 2 from core 2: 2686.13 10.49 5954.74 969.11 13877.17 00:10:08.656 PCIE (0000:00:08.0) NSID 3 from core 2: 2686.13 10.49 5955.76 1084.09 13946.19 00:10:08.656 ======================================================== 00:10:08.656 Total : 16116.80 62.96 5955.58 964.13 17078.94 00:10:08.656 00:10:08.656 00:04:23 -- nvme/nvme.sh@56 -- # wait 76135 00:10:10.555 Initializing NVMe Controllers 00:10:10.555 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:10.555 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:10.555 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:10.555 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:10.555 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:10.555 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:10.555 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:10.555 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:10.555 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:10.555 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:10.555 Initialization complete. Launching workers. 
00:10:10.555 ======================================================== 00:10:10.555 Latency(us) 00:10:10.555 Device Information : IOPS MiB/s Average min max 00:10:10.555 PCIE (0000:00:09.0) NSID 1 from core 0: 9573.29 37.40 1670.94 772.62 8887.18 00:10:10.555 PCIE (0000:00:06.0) NSID 1 from core 0: 9573.29 37.40 1670.10 744.37 8915.53 00:10:10.555 PCIE (0000:00:07.0) NSID 1 from core 0: 9573.29 37.40 1670.91 754.47 9275.71 00:10:10.555 PCIE (0000:00:08.0) NSID 1 from core 0: 9573.29 37.40 1670.88 577.38 8950.56 00:10:10.555 PCIE (0000:00:08.0) NSID 2 from core 0: 9573.29 37.40 1670.86 519.71 9046.30 00:10:10.555 PCIE (0000:00:08.0) NSID 3 from core 0: 9573.29 37.40 1670.84 443.04 8830.62 00:10:10.555 ======================================================== 00:10:10.555 Total : 57439.75 224.37 1670.76 443.04 9275.71 00:10:10.556 00:10:10.556 00:04:25 -- nvme/nvme.sh@57 -- # wait 76136 00:10:10.556 00:04:25 -- nvme/nvme.sh@61 -- # pid0=76205 00:10:10.556 00:04:25 -- nvme/nvme.sh@63 -- # pid1=76206 00:10:10.556 00:04:25 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:10.556 00:04:25 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:10.556 00:04:25 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:13.835 Initializing NVMe Controllers 00:10:13.835 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:13.835 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:13.835 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:13.835 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:13.835 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:13.835 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:13.835 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:13.835 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:13.835 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:13.835 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:13.835 Initialization complete. Launching workers. 
00:10:13.835 ======================================================== 00:10:13.836 Latency(us) 00:10:13.836 Device Information : IOPS MiB/s Average min max 00:10:13.836 PCIE (0000:00:09.0) NSID 1 from core 0: 7128.53 27.85 2244.10 792.77 6518.12 00:10:13.836 PCIE (0000:00:06.0) NSID 1 from core 0: 7128.53 27.85 2243.11 776.76 6705.21 00:10:13.836 PCIE (0000:00:07.0) NSID 1 from core 0: 7128.53 27.85 2244.08 795.39 7940.96 00:10:13.836 PCIE (0000:00:08.0) NSID 1 from core 0: 7128.53 27.85 2244.01 809.21 6852.53 00:10:13.836 PCIE (0000:00:08.0) NSID 2 from core 0: 7128.53 27.85 2243.97 745.44 6883.02 00:10:13.836 PCIE (0000:00:08.0) NSID 3 from core 0: 7128.53 27.85 2243.98 713.37 6360.72 00:10:13.836 ======================================================== 00:10:13.836 Total : 42771.15 167.07 2243.88 713.37 7940.96 00:10:13.836 00:10:13.836 Initializing NVMe Controllers 00:10:13.836 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:13.836 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:13.836 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:13.836 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:13.836 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:13.836 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:13.836 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:13.836 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:13.836 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:13.836 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:13.836 Initialization complete. Launching workers. 00:10:13.836 ======================================================== 00:10:13.836 Latency(us) 00:10:13.836 Device Information : IOPS MiB/s Average min max 00:10:13.836 PCIE (0000:00:09.0) NSID 1 from core 1: 7393.79 28.88 2163.51 779.10 7269.73 00:10:13.836 PCIE (0000:00:06.0) NSID 1 from core 1: 7393.79 28.88 2162.51 766.24 7131.95 00:10:13.836 PCIE (0000:00:07.0) NSID 1 from core 1: 7393.79 28.88 2163.41 769.67 6967.41 00:10:13.836 PCIE (0000:00:08.0) NSID 1 from core 1: 7393.79 28.88 2163.38 780.82 6651.75 00:10:13.836 PCIE (0000:00:08.0) NSID 2 from core 1: 7393.79 28.88 2163.34 788.16 7205.05 00:10:13.836 PCIE (0000:00:08.0) NSID 3 from core 1: 7393.79 28.88 2163.26 783.83 7126.08 00:10:13.836 ======================================================== 00:10:13.836 Total : 44362.75 173.29 2163.24 766.24 7269.73 00:10:13.836 00:10:16.367 Initializing NVMe Controllers 00:10:16.367 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:16.367 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:16.367 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:16.367 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:16.367 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:16.367 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:16.367 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:16.367 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:16.367 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:16.367 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:16.367 Initialization complete. Launching workers. 
00:10:16.367 ======================================================== 00:10:16.367 Latency(us) 00:10:16.367 Device Information : IOPS MiB/s Average min max 00:10:16.367 PCIE (0000:00:09.0) NSID 1 from core 2: 4402.11 17.20 3634.08 781.24 16277.23 00:10:16.367 PCIE (0000:00:06.0) NSID 1 from core 2: 4402.11 17.20 3632.58 796.68 15972.34 00:10:16.367 PCIE (0000:00:07.0) NSID 1 from core 2: 4405.31 17.21 3631.58 755.38 16198.60 00:10:16.367 PCIE (0000:00:08.0) NSID 1 from core 2: 4405.31 17.21 3631.52 651.84 12630.37 00:10:16.367 PCIE (0000:00:08.0) NSID 2 from core 2: 4405.31 17.21 3631.30 555.14 12523.65 00:10:16.367 PCIE (0000:00:08.0) NSID 3 from core 2: 4405.31 17.21 3631.44 460.68 12679.80 00:10:16.367 ======================================================== 00:10:16.367 Total : 26425.44 103.22 3632.08 460.68 16277.23 00:10:16.367 00:10:16.367 00:04:30 -- nvme/nvme.sh@65 -- # wait 76205 00:10:16.367 00:04:30 -- nvme/nvme.sh@66 -- # wait 76206 00:10:16.367 00:10:16.367 real 0m10.608s 00:10:16.367 user 0m18.317s 00:10:16.367 sys 0m0.555s 00:10:16.367 00:04:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.367 00:04:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.367 ************************************ 00:10:16.367 END TEST nvme_multi_secondary 00:10:16.367 ************************************ 00:10:16.367 00:04:30 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:16.367 00:04:30 -- nvme/nvme.sh@102 -- # kill_stub 00:10:16.367 00:04:30 -- common/autotest_common.sh@1075 -- # [[ -e /proc/75164 ]] 00:10:16.367 00:04:30 -- common/autotest_common.sh@1076 -- # kill 75164 00:10:16.367 00:04:30 -- common/autotest_common.sh@1077 -- # wait 75164 00:10:16.938 [2024-11-28 00:04:31.331970] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:16.938 [2024-11-28 00:04:31.332094] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:16.938 [2024-11-28 00:04:31.332127] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:16.938 [2024-11-28 00:04:31.332159] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:17.510 [2024-11-28 00:04:31.838578] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:17.510 [2024-11-28 00:04:31.838694] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:17.510 [2024-11-28 00:04:31.838726] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:17.510 [2024-11-28 00:04:31.838757] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:18.454 [2024-11-28 00:04:32.839756] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:18.454 [2024-11-28 00:04:32.839862] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. 
Dropping the request. 00:10:18.454 [2024-11-28 00:04:32.839892] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:18.454 [2024-11-28 00:04:32.839927] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:19.399 [2024-11-28 00:04:33.854598] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:19.399 [2024-11-28 00:04:33.854694] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:19.399 [2024-11-28 00:04:33.854719] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:19.399 [2024-11-28 00:04:33.854742] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76078) is not found. Dropping the request. 00:10:19.399 00:04:33 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:19.399 00:04:33 -- common/autotest_common.sh@1083 -- # echo 2 00:10:19.399 00:04:33 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:19.399 00:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:19.399 00:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.399 00:04:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.399 ************************************ 00:10:19.399 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:19.399 ************************************ 00:10:19.399 00:04:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:19.659 * Looking for test storage... 00:10:19.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:19.659 00:04:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:19.659 00:04:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:19.659 00:04:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:19.659 00:04:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:19.659 00:04:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:19.659 00:04:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:19.659 00:04:34 -- scripts/common.sh@335 -- # IFS=.-: 00:10:19.659 00:04:34 -- scripts/common.sh@335 -- # read -ra ver1 00:10:19.659 00:04:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.659 00:04:34 -- scripts/common.sh@336 -- # read -ra ver2 00:10:19.659 00:04:34 -- scripts/common.sh@337 -- # local 'op=<' 00:10:19.659 00:04:34 -- scripts/common.sh@339 -- # ver1_l=2 00:10:19.659 00:04:34 -- scripts/common.sh@340 -- # ver2_l=1 00:10:19.659 00:04:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:19.659 00:04:34 -- scripts/common.sh@343 -- # case "$op" in 00:10:19.659 00:04:34 -- scripts/common.sh@344 -- # : 1 00:10:19.659 00:04:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:19.659 00:04:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.659 00:04:34 -- scripts/common.sh@364 -- # decimal 1 00:10:19.659 00:04:34 -- scripts/common.sh@352 -- # local d=1 00:10:19.659 00:04:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.659 00:04:34 -- scripts/common.sh@354 -- # echo 1 00:10:19.659 00:04:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:19.659 00:04:34 -- scripts/common.sh@365 -- # decimal 2 00:10:19.659 00:04:34 -- scripts/common.sh@352 -- # local d=2 00:10:19.659 00:04:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.659 00:04:34 -- scripts/common.sh@354 -- # echo 2 00:10:19.659 00:04:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:19.659 00:04:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:19.659 00:04:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:19.659 00:04:34 -- scripts/common.sh@367 -- # return 0 00:10:19.659 00:04:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.659 --rc genhtml_branch_coverage=1 00:10:19.659 --rc genhtml_function_coverage=1 00:10:19.659 --rc genhtml_legend=1 00:10:19.659 --rc geninfo_all_blocks=1 00:10:19.659 --rc geninfo_unexecuted_blocks=1 00:10:19.659 00:10:19.659 ' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.659 --rc genhtml_branch_coverage=1 00:10:19.659 --rc genhtml_function_coverage=1 00:10:19.659 --rc genhtml_legend=1 00:10:19.659 --rc geninfo_all_blocks=1 00:10:19.659 --rc geninfo_unexecuted_blocks=1 00:10:19.659 00:10:19.659 ' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.659 --rc genhtml_branch_coverage=1 00:10:19.659 --rc genhtml_function_coverage=1 00:10:19.659 --rc genhtml_legend=1 00:10:19.659 --rc geninfo_all_blocks=1 00:10:19.659 --rc geninfo_unexecuted_blocks=1 00:10:19.659 00:10:19.659 ' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.659 --rc genhtml_branch_coverage=1 00:10:19.659 --rc genhtml_function_coverage=1 00:10:19.659 --rc genhtml_legend=1 00:10:19.659 --rc geninfo_all_blocks=1 00:10:19.659 --rc geninfo_unexecuted_blocks=1 00:10:19.659 00:10:19.659 ' 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:19.659 00:04:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:19.659 00:04:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:19.659 00:04:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:19.659 00:04:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:19.659 00:04:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:19.659 00:04:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:19.659 00:04:34 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:19.659 00:04:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:19.659 00:04:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:19.659 00:04:34 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:19.659 00:04:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:19.659 00:04:34 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76400 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76400 00:10:19.659 00:04:34 -- common/autotest_common.sh@829 -- # '[' -z 76400 ']' 00:10:19.659 00:04:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.659 00:04:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:19.659 00:04:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.659 00:04:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.659 00:04:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.659 00:04:34 -- common/autotest_common.sh@10 -- # set +x 00:10:19.659 [2024-11-28 00:04:34.209763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
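
The bdf resolution traced above is SPDK's gen_nvme.sh emitting a bdev JSON config and jq pulling each controller's PCI address out of it. A minimal standalone sketch of that pattern, assuming the same /home/vagrant/spdk_repo layout as this run (the script below is illustrative, not part of the captured log):

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk

    # gen_nvme.sh prints a bdev JSON config; each attached controller appears
    # under .config[].params.traddr as its PCI address.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

    printf 'found %d controller(s):\n' "${#bdfs[@]}"
    printf '  %s\n' "${bdfs[@]}"

    # The reset-stuck-adm-cmd test simply takes the first entry (0000:00:06.0 in this run).
    echo "first bdf: ${bdfs[0]}"
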
00:10:19.659 [2024-11-28 00:04:34.209869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76400 ] 00:10:19.920 [2024-11-28 00:04:34.365233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.920 [2024-11-28 00:04:34.396984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.920 [2024-11-28 00:04:34.397472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.920 [2024-11-28 00:04:34.397645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.920 [2024-11-28 00:04:34.397922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.920 [2024-11-28 00:04:34.398016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.491 00:04:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.491 00:04:35 -- common/autotest_common.sh@862 -- # return 0 00:10:20.491 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:20.491 00:04:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.491 00:04:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.491 nvme0n1 00:10:20.491 00:04:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.491 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:20.491 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_KReWx.txt 00:10:20.491 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:20.491 00:04:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.491 00:04:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.750 true 00:10:20.750 00:04:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732752275 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76419 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:20.750 00:04:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:22.665 00:04:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.665 00:04:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 [2024-11-28 00:04:37.105407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:22.665 [2024-11-28 00:04:37.105910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:22.665 [2024-11-28 00:04:37.105950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:22.665 [2024-11-28 00:04:37.105961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.665 [2024-11-28 00:04:37.107549] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:22.665 00:04:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.665 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76419 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76419 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76419 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:22.665 00:04:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.665 00:04:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.665 00:04:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_KReWx.txt 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_KReWx.txt 00:10:22.665 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76400 00:10:22.665 00:04:37 -- common/autotest_common.sh@936 -- # '[' -z 76400 ']' 00:10:22.665 00:04:37 -- common/autotest_common.sh@940 -- # kill -0 76400 00:10:22.665 00:04:37 -- common/autotest_common.sh@941 -- # uname 00:10:22.665 00:04:37 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.665 00:04:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76400 00:10:22.665 killing process with pid 76400 00:10:22.665 00:04:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:22.665 00:04:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:22.665 00:04:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76400' 00:10:22.665 00:04:37 -- common/autotest_common.sh@955 -- # kill 76400 00:10:22.665 00:04:37 -- common/autotest_common.sh@960 -- # wait 76400 00:10:22.927 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:22.927 00:04:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:22.927 00:10:22.927 real 0m3.500s 00:10:22.927 user 0m12.446s 00:10:22.927 sys 0m0.468s 00:10:22.927 00:04:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.927 00:04:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.927 ************************************ 00:10:22.927 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:22.927 ************************************ 00:10:22.927 00:04:37 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:22.927 00:04:37 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:22.927 00:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.927 00:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.927 00:04:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.927 ************************************ 00:10:22.927 START TEST nvme_fio 00:10:22.927 ************************************ 00:10:22.927 00:04:37 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:22.927 00:04:37 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:22.927 00:04:37 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:22.927 00:04:37 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:22.927 00:04:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:22.927 00:04:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:22.927 00:04:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:22.927 00:04:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:22.927 00:04:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:23.293 00:04:37 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:23.293 00:04:37 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:23.293 00:04:37 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:23.293 00:04:37 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:23.293 00:04:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:23.293 00:04:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:23.293 00:04:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:23.293 00:04:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:23.293 00:04:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:23.553 00:04:37 -- nvme/nvme.sh@41 -- # bs=4096 00:10:23.553 00:04:37 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:23.553 00:04:37 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:23.553 00:04:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:23.553 00:04:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:23.553 00:04:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:23.553 00:04:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.553 00:04:37 -- common/autotest_common.sh@1330 -- # shift 00:10:23.553 00:04:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:23.553 00:04:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:23.553 00:04:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:23.553 00:04:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.553 00:04:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:23.553 00:04:37 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:23.553 00:04:37 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:23.553 00:04:37 -- common/autotest_common.sh@1336 -- # break 00:10:23.553 00:04:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:23.553 00:04:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:23.553 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:23.553 fio-3.35 00:10:23.553 Starting 1 thread 00:10:30.141 00:10:30.141 test: (groupid=0, jobs=1): err= 0: pid=76546: Thu Nov 28 00:04:44 2024 00:10:30.141 read: IOPS=23.6k, BW=92.1MiB/s (96.5MB/s)(184MiB/2001msec) 00:10:30.141 slat (nsec): min=3326, max=90437, avg=4873.84, stdev=2007.84 00:10:30.141 clat (usec): min=336, max=8131, avg=2713.36, stdev=786.04 00:10:30.141 lat (usec): min=340, max=8171, avg=2718.24, stdev=787.31 00:10:30.141 clat percentiles (usec): 00:10:30.141 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:30.141 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:30.141 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 3097], 95.00th=[ 4490], 00:10:30.141 | 99.00th=[ 6259], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 7898], 00:10:30.141 | 99.99th=[ 8029] 00:10:30.141 bw ( KiB/s): min=92984, max=96856, per=100.00%, avg=94752.00, stdev=1957.75, samples=3 00:10:30.141 iops : min=23246, max=24214, avg=23688.00, stdev=489.44, samples=3 00:10:30.141 write: IOPS=23.4k, BW=91.4MiB/s (95.9MB/s)(183MiB/2001msec); 0 zone resets 00:10:30.141 slat (nsec): min=3431, max=67397, avg=5128.35, stdev=2016.59 00:10:30.141 clat (usec): min=522, max=8179, avg=2719.75, stdev=785.65 00:10:30.141 lat (usec): min=534, max=8192, avg=2724.88, stdev=786.94 00:10:30.141 clat percentiles (usec): 00:10:30.141 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:30.141 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2606], 00:10:30.141 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3097], 95.00th=[ 4490], 00:10:30.141 | 99.00th=[ 6259], 99.50th=[ 
6980], 99.90th=[ 7767], 99.95th=[ 7898], 00:10:30.141 | 99.99th=[ 8094] 00:10:30.141 bw ( KiB/s): min=93864, max=96208, per=100.00%, avg=94770.67, stdev=1258.88, samples=3 00:10:30.141 iops : min=23466, max=24052, avg=23692.67, stdev=314.72, samples=3 00:10:30.141 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:10:30.141 lat (msec) : 2=1.06%, 4=92.93%, 10=5.97% 00:10:30.141 cpu : usr=99.35%, sys=0.00%, ctx=5, majf=0, minf=627 00:10:30.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:30.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.141 issued rwts: total=47161,46835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.141 00:10:30.141 Run status group 0 (all jobs): 00:10:30.141 READ: bw=92.1MiB/s (96.5MB/s), 92.1MiB/s-92.1MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:10:30.141 WRITE: bw=91.4MiB/s (95.9MB/s), 91.4MiB/s-91.4MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:30.141 ----------------------------------------------------- 00:10:30.141 Suppressions used: 00:10:30.141 count bytes template 00:10:30.141 1 32 /usr/src/fio/parse.c 00:10:30.141 1 8 libtcmalloc_minimal.so 00:10:30.141 ----------------------------------------------------- 00:10:30.141 00:10:30.141 00:04:44 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:30.141 00:04:44 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:30.141 00:04:44 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:30.141 00:04:44 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:30.141 00:04:44 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:30.141 00:04:44 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:30.403 00:04:44 -- nvme/nvme.sh@41 -- # bs=4096 00:10:30.403 00:04:44 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.403 00:04:44 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.403 00:04:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:30.403 00:04:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:30.403 00:04:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:30.403 00:04:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.403 00:04:44 -- common/autotest_common.sh@1330 -- # shift 00:10:30.403 00:04:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:30.403 00:04:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:30.403 00:04:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.403 00:04:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:30.403 00:04:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:30.403 00:04:44 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:30.403 00:04:44 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
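
Each per-controller fio pass in this test follows the same recipe: preload the SPDK NVMe fio plugin (plus the ASan runtime when the plugin was built with it) and hand fio a --filename that encodes the PCIe transport and traddr instead of a block device. A condensed sketch of what the fio_nvme helper does, using paths taken from this run; it is a simplification of the traced logic, not the test's exact code:

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    plugin=$spdk/build/fio/spdk_nvme
    job=$spdk/app/fio/nvme/example_config.fio
    bdf=0000.00.06.0    # PCI address written with dots, as the plugin expects

    # When the plugin links against ASan, the sanitizer runtime has to be
    # preloaded ahead of it; ldd reports which libasan (if any) it uses.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$job" "--filename=trtype=PCIe traddr=$bdf" --bs=4096
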
00:10:30.403 00:04:44 -- common/autotest_common.sh@1336 -- # break 00:10:30.403 00:04:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:30.403 00:04:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:30.672 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:30.672 fio-3.35 00:10:30.672 Starting 1 thread 00:10:38.831 00:10:38.831 test: (groupid=0, jobs=1): err= 0: pid=76645: Thu Nov 28 00:04:52 2024 00:10:38.831 read: IOPS=23.6k, BW=92.1MiB/s (96.5MB/s)(184MiB/2001msec) 00:10:38.831 slat (nsec): min=4143, max=81057, avg=4892.13, stdev=2191.46 00:10:38.831 clat (usec): min=232, max=11382, avg=2712.47, stdev=818.09 00:10:38.831 lat (usec): min=237, max=11442, avg=2717.36, stdev=819.48 00:10:38.831 clat percentiles (usec): 00:10:38.831 | 1.00th=[ 2040], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:38.831 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:38.831 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 3064], 95.00th=[ 4490], 00:10:38.831 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7832], 99.95th=[ 8225], 00:10:38.831 | 99.99th=[11207] 00:10:38.831 bw ( KiB/s): min=91712, max=97752, per=100.00%, avg=95536.00, stdev=3325.60, samples=3 00:10:38.831 iops : min=22928, max=24438, avg=23884.00, stdev=831.40, samples=3 00:10:38.831 write: IOPS=23.4k, BW=91.4MiB/s (95.9MB/s)(183MiB/2001msec); 0 zone resets 00:10:38.831 slat (nsec): min=4235, max=48331, avg=5143.03, stdev=2045.54 00:10:38.831 clat (usec): min=316, max=11280, avg=2718.95, stdev=819.91 00:10:38.831 lat (usec): min=320, max=11295, avg=2724.09, stdev=821.24 00:10:38.831 clat percentiles (usec): 00:10:38.831 | 1.00th=[ 2040], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2343], 00:10:38.831 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:38.831 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 3032], 95.00th=[ 4490], 00:10:38.831 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7832], 99.95th=[ 8717], 00:10:38.831 | 99.99th=[11076] 00:10:38.831 bw ( KiB/s): min=92568, max=97408, per=100.00%, avg=95560.00, stdev=2614.95, samples=3 00:10:38.831 iops : min=23142, max=24352, avg=23890.00, stdev=653.74, samples=3 00:10:38.831 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:38.831 lat (msec) : 2=0.74%, 4=93.01%, 10=6.19%, 20=0.03% 00:10:38.831 cpu : usr=99.40%, sys=0.00%, ctx=6, majf=0, minf=627 00:10:38.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:38.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.831 issued rwts: total=47157,46831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.831 00:10:38.831 Run status group 0 (all jobs): 00:10:38.831 READ: bw=92.1MiB/s (96.5MB/s), 92.1MiB/s-92.1MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:10:38.831 WRITE: bw=91.4MiB/s (95.9MB/s), 91.4MiB/s-91.4MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:38.831 ----------------------------------------------------- 00:10:38.831 Suppressions used: 00:10:38.831 count bytes template 00:10:38.831 1 32 /usr/src/fio/parse.c 00:10:38.831 1 8 libtcmalloc_minimal.so 
00:10:38.831 ----------------------------------------------------- 00:10:38.831 00:10:38.831 00:04:52 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:38.831 00:04:52 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:38.832 00:04:52 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:38.832 00:04:52 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:38.832 00:04:52 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:38.832 00:04:52 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:38.832 00:04:52 -- nvme/nvme.sh@41 -- # bs=4096 00:10:38.832 00:04:52 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:38.832 00:04:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:38.832 00:04:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:38.832 00:04:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:38.832 00:04:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:38.832 00:04:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.832 00:04:52 -- common/autotest_common.sh@1330 -- # shift 00:10:38.832 00:04:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:38.832 00:04:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:38.832 00:04:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:38.832 00:04:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.832 00:04:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:38.832 00:04:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:38.832 00:04:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:38.832 00:04:52 -- common/autotest_common.sh@1336 -- # break 00:10:38.832 00:04:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:38.832 00:04:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:38.832 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:38.832 fio-3.35 00:10:38.832 Starting 1 thread 00:10:45.483 00:10:45.483 test: (groupid=0, jobs=1): err= 0: pid=76754: Thu Nov 28 00:04:59 2024 00:10:45.483 read: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(182MiB/2001msec) 00:10:45.483 slat (nsec): min=4153, max=56950, avg=5011.41, stdev=2283.39 00:10:45.483 clat (usec): min=201, max=10651, avg=2745.73, stdev=871.55 00:10:45.483 lat (usec): min=206, max=10657, avg=2750.74, stdev=873.04 00:10:45.483 clat percentiles (usec): 00:10:45.483 | 1.00th=[ 2040], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:45.483 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:45.483 | 70.00th=[ 2638], 80.00th=[ 2802], 90.00th=[ 3392], 95.00th=[ 4948], 00:10:45.483 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7701], 99.95th=[ 8848], 00:10:45.483 | 
99.99th=[10421] 00:10:45.483 bw ( KiB/s): min=92840, max=93728, per=100.00%, avg=93365.33, stdev=465.81, samples=3 00:10:45.483 iops : min=23210, max=23432, avg=23341.33, stdev=116.45, samples=3 00:10:45.483 write: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(181MiB/2001msec); 0 zone resets 00:10:45.483 slat (nsec): min=4226, max=67359, avg=5270.25, stdev=2340.08 00:10:45.483 clat (usec): min=259, max=10629, avg=2755.48, stdev=877.00 00:10:45.483 lat (usec): min=263, max=10635, avg=2760.75, stdev=878.53 00:10:45.483 clat percentiles (usec): 00:10:45.483 | 1.00th=[ 2057], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:45.483 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:45.483 | 70.00th=[ 2671], 80.00th=[ 2802], 90.00th=[ 3392], 95.00th=[ 5014], 00:10:45.483 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[ 9765], 00:10:45.483 | 99.99th=[10552] 00:10:45.483 bw ( KiB/s): min=92392, max=94384, per=100.00%, avg=93432.00, stdev=998.91, samples=3 00:10:45.483 iops : min=23098, max=23596, avg=23358.00, stdev=249.73, samples=3 00:10:45.483 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:45.483 lat (msec) : 2=0.59%, 4=92.06%, 10=7.28%, 20=0.03% 00:10:45.483 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=628 00:10:45.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:45.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.483 issued rwts: total=46544,46246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.483 00:10:45.483 Run status group 0 (all jobs): 00:10:45.483 READ: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=182MiB (191MB), run=2001-2001msec 00:10:45.483 WRITE: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=181MiB (189MB), run=2001-2001msec 00:10:45.483 ----------------------------------------------------- 00:10:45.483 Suppressions used: 00:10:45.483 count bytes template 00:10:45.483 1 32 /usr/src/fio/parse.c 00:10:45.483 1 8 libtcmalloc_minimal.so 00:10:45.483 ----------------------------------------------------- 00:10:45.483 00:10:45.483 00:05:00 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:45.483 00:05:00 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:45.483 00:05:00 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:45.483 00:05:00 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:45.745 00:05:00 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:45.745 00:05:00 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:46.007 00:05:00 -- nvme/nvme.sh@41 -- # bs=4096 00:10:46.007 00:05:00 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:46.007 00:05:00 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:46.007 00:05:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:46.007 00:05:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:46.007 00:05:00 -- common/autotest_common.sh@1328 
-- # local sanitizers 00:10:46.007 00:05:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.007 00:05:00 -- common/autotest_common.sh@1330 -- # shift 00:10:46.007 00:05:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:46.007 00:05:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:46.007 00:05:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:46.007 00:05:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:46.007 00:05:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:46.007 00:05:00 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:46.007 00:05:00 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:46.007 00:05:00 -- common/autotest_common.sh@1336 -- # break 00:10:46.007 00:05:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:46.007 00:05:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:46.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:46.007 fio-3.35 00:10:46.007 Starting 1 thread 00:10:52.608 00:10:52.608 test: (groupid=0, jobs=1): err= 0: pid=76836: Thu Nov 28 00:05:06 2024 00:10:52.608 read: IOPS=24.6k, BW=95.9MiB/s (101MB/s)(192MiB/2001msec) 00:10:52.608 slat (nsec): min=3308, max=78718, avg=4785.25, stdev=1856.14 00:10:52.608 clat (usec): min=195, max=11803, avg=2599.06, stdev=703.52 00:10:52.608 lat (usec): min=199, max=11868, avg=2603.84, stdev=704.66 00:10:52.608 clat percentiles (usec): 00:10:52.608 | 1.00th=[ 1975], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:10:52.608 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2474], 00:10:52.608 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 2999], 95.00th=[ 3818], 00:10:52.608 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7111], 99.95th=[ 8586], 00:10:52.608 | 99.99th=[11469] 00:10:52.608 bw ( KiB/s): min=94032, max=98944, per=98.99%, avg=97248.00, stdev=2786.53, samples=3 00:10:52.608 iops : min=23508, max=24736, avg=24312.00, stdev=696.63, samples=3 00:10:52.608 write: IOPS=24.4k, BW=95.3MiB/s (99.9MB/s)(191MiB/2001msec); 0 zone resets 00:10:52.608 slat (nsec): min=3451, max=66567, avg=5066.10, stdev=1865.35 00:10:52.608 clat (usec): min=230, max=11618, avg=2611.48, stdev=710.41 00:10:52.608 lat (usec): min=234, max=11633, avg=2616.55, stdev=711.56 00:10:52.608 clat percentiles (usec): 00:10:52.608 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2278], 00:10:52.608 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:10:52.608 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 3032], 95.00th=[ 3851], 00:10:52.608 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 9110], 00:10:52.608 | 99.99th=[11207] 00:10:52.608 bw ( KiB/s): min=93848, max=100008, per=99.72%, avg=97328.00, stdev=3156.96, samples=3 00:10:52.608 iops : min=23462, max=25002, avg=24332.00, stdev=789.24, samples=3 00:10:52.608 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:52.608 lat (msec) : 2=1.17%, 4=94.26%, 10=4.48%, 20=0.03% 00:10:52.608 cpu : usr=99.25%, sys=0.10%, ctx=4, majf=0, minf=625 00:10:52.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 
00:10:52.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.608 issued rwts: total=49143,48825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.608 00:10:52.608 Run status group 0 (all jobs): 00:10:52.608 READ: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=192MiB (201MB), run=2001-2001msec 00:10:52.608 WRITE: bw=95.3MiB/s (99.9MB/s), 95.3MiB/s-95.3MiB/s (99.9MB/s-99.9MB/s), io=191MiB (200MB), run=2001-2001msec 00:10:52.608 ----------------------------------------------------- 00:10:52.608 Suppressions used: 00:10:52.608 count bytes template 00:10:52.608 1 32 /usr/src/fio/parse.c 00:10:52.608 1 8 libtcmalloc_minimal.so 00:10:52.608 ----------------------------------------------------- 00:10:52.608 00:10:52.608 00:05:06 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:52.608 00:05:06 -- nvme/nvme.sh@46 -- # true 00:10:52.608 00:10:52.608 real 0m28.739s 00:10:52.608 user 0m18.511s 00:10:52.608 sys 0m18.179s 00:10:52.608 00:05:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:52.608 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:10:52.608 ************************************ 00:10:52.608 END TEST nvme_fio 00:10:52.608 ************************************ 00:10:52.608 00:10:52.608 real 1m39.861s 00:10:52.608 user 3m33.220s 00:10:52.608 sys 0m28.693s 00:10:52.608 00:05:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:52.608 ************************************ 00:10:52.608 END TEST nvme 00:10:52.608 ************************************ 00:10:52.608 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:10:52.608 00:05:06 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:52.608 00:05:06 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:52.608 00:05:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:52.608 00:05:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.608 00:05:06 -- common/autotest_common.sh@10 -- # set +x 00:10:52.608 ************************************ 00:10:52.608 START TEST nvme_scc 00:10:52.608 ************************************ 00:10:52.608 00:05:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:52.608 * Looking for test storage... 
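
A quick sanity check on the fio summaries above: bandwidth is just completed I/Os x block size / runtime. For the last pass, 49143 reads x 4096 B / 2.001 s is roughly 100.6 MB/s (95.9 MiB/s) and 48825 writes x 4096 B / 2.001 s is roughly 99.9 MB/s (95.3 MiB/s), matching the reported READ bw=95.9MiB/s (101MB/s) and WRITE bw=95.3MiB/s (99.9MB/s) lines.
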
00:10:52.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:52.608 00:05:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:52.608 00:05:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:52.608 00:05:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:52.608 00:05:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:52.608 00:05:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:52.608 00:05:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:52.608 00:05:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:52.608 00:05:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:52.608 00:05:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:52.608 00:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.608 00:05:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:52.608 00:05:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:52.608 00:05:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:52.608 00:05:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:52.608 00:05:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:52.608 00:05:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:52.608 00:05:06 -- scripts/common.sh@344 -- # : 1 00:10:52.608 00:05:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:52.608 00:05:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.608 00:05:06 -- scripts/common.sh@364 -- # decimal 1 00:10:52.608 00:05:06 -- scripts/common.sh@352 -- # local d=1 00:10:52.608 00:05:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.608 00:05:06 -- scripts/common.sh@354 -- # echo 1 00:10:52.608 00:05:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:52.608 00:05:06 -- scripts/common.sh@365 -- # decimal 2 00:10:52.608 00:05:06 -- scripts/common.sh@352 -- # local d=2 00:10:52.608 00:05:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.608 00:05:06 -- scripts/common.sh@354 -- # echo 2 00:10:52.608 00:05:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:52.608 00:05:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:52.608 00:05:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:52.608 00:05:06 -- scripts/common.sh@367 -- # return 0 00:10:52.608 00:05:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.608 00:05:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:52.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.608 --rc genhtml_branch_coverage=1 00:10:52.608 --rc genhtml_function_coverage=1 00:10:52.608 --rc genhtml_legend=1 00:10:52.608 --rc geninfo_all_blocks=1 00:10:52.608 --rc geninfo_unexecuted_blocks=1 00:10:52.608 00:10:52.608 ' 00:10:52.608 00:05:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:52.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.608 --rc genhtml_branch_coverage=1 00:10:52.608 --rc genhtml_function_coverage=1 00:10:52.608 --rc genhtml_legend=1 00:10:52.608 --rc geninfo_all_blocks=1 00:10:52.608 --rc geninfo_unexecuted_blocks=1 00:10:52.608 00:10:52.608 ' 00:10:52.608 00:05:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:52.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.608 --rc genhtml_branch_coverage=1 00:10:52.608 --rc genhtml_function_coverage=1 00:10:52.608 --rc genhtml_legend=1 00:10:52.608 --rc geninfo_all_blocks=1 00:10:52.608 --rc geninfo_unexecuted_blocks=1 00:10:52.608 00:10:52.608 ' 00:10:52.608 00:05:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:52.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.608 --rc genhtml_branch_coverage=1 00:10:52.608 --rc genhtml_function_coverage=1 00:10:52.608 --rc genhtml_legend=1 00:10:52.608 --rc geninfo_all_blocks=1 00:10:52.608 --rc geninfo_unexecuted_blocks=1 00:10:52.608 00:10:52.609 ' 00:10:52.609 00:05:06 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:52.609 00:05:06 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:52.609 00:05:06 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:52.609 00:05:06 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:52.609 00:05:06 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.609 00:05:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.609 00:05:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.609 00:05:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.609 00:05:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.609 00:05:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.609 00:05:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.609 00:05:06 -- paths/export.sh@5 -- # export PATH 00:10:52.609 00:05:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.609 00:05:06 -- nvme/functions.sh@10 -- # ctrls=() 00:10:52.609 00:05:06 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:52.609 00:05:06 -- nvme/functions.sh@11 -- # nvmes=() 00:10:52.609 00:05:06 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:52.609 00:05:06 -- nvme/functions.sh@12 -- # bdfs=() 00:10:52.609 00:05:06 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:52.609 00:05:06 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:52.609 00:05:06 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:52.609 
00:05:06 -- nvme/functions.sh@14 -- # nvme_name= 00:10:52.609 00:05:06 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.609 00:05:06 -- nvme/nvme_scc.sh@12 -- # uname 00:10:52.609 00:05:06 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:52.609 00:05:06 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:52.609 00:05:06 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:52.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.609 Waiting for block devices as requested 00:10:52.609 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.609 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.609 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.609 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.891 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:57.891 00:05:12 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:57.891 00:05:12 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:57.891 00:05:12 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.891 00:05:12 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:57.891 00:05:12 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:57.891 00:05:12 -- scripts/common.sh@15 -- # local i 00:10:57.891 00:05:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:57.891 00:05:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.891 00:05:12 -- scripts/common.sh@24 -- # return 0 00:10:57.891 00:05:12 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:57.891 00:05:12 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:57.891 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.891 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
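
The nvme_get calls being traced here are functions.sh parsing nvme id-ctrl output into a bash associative array, one eval per "field : value" line (nvme0[vid]=0x1b36, nvme0[sn]='12343 ', and so on). A simplified, self-contained sketch of that parsing idea; the array name and the hard-coded /dev/nvme0 below are illustrative rather than the test's actual code:

    #!/usr/bin/env bash
    declare -A ctrl=()

    # nvme-cli prints id-ctrl as "field : value" lines; split on the colon and
    # keep each pair in the associative array, much like functions.sh does with
    # its per-controller eval calls.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field name, e.g. vid, sn, mdts
        val=${val# }                  # drop the space that follows the colon
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    printf 'vid=%s  sn=%s  mdts=%s\n' "${ctrl[vid]}" "${ctrl[sn]}" "${ctrl[mdts]}"
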
00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:57.891 00:05:12 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.891 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.891 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 
00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.892 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:57.892 00:05:12 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.892 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.892 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:57.893 00:05:12 
-- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:57.893 
00:05:12 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:57.893 00:05:12 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.893 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.893 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 
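(Editor's note: with nvme0's identify data captured, the next trace entries record the controller into the ctrls/nvmes/bdfs maps declared earlier. A brief usage sketch follows; it assumes the nvme0 array and bdfs map populated by this scan are in scope, and the ONCS bit meaning comes from the NVMe base specification, not from anything printed in this log.)

    # Hypothetical sketch: consume the scanned data, e.g. check an optional
    # command bit and resolve the controller's PCI address.
    if (( nvme0[oncs] & 0x1 )); then                  # ONCS bit 0 = Compare command
        echo "nvme0 (bdf ${bdfs[nvme0]}) supports Compare"   # oncs=0x15d has bit 0 set
    fi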
00:10:57.894 00:05:12 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:57.894 00:05:12 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:57.894 00:05:12 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:57.894 00:05:12 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:57.894 00:05:12 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:57.894 00:05:12 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.894 00:05:12 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:57.894 00:05:12 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:57.894 00:05:12 -- scripts/common.sh@15 -- # local i 00:10:57.894 00:05:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:57.894 00:05:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.894 00:05:12 -- scripts/common.sh@24 -- # return 0 00:10:57.894 00:05:12 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:57.894 00:05:12 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:57.894 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.894 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.894 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.894 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:57.894 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.894 00:05:12 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:57.895 00:05:12 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.895 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.895 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.895 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 
00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.896 00:05:12 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:57.896 00:05:12 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:57.896 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:57.897 00:05:12 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.897 00:05:12 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:57.897 00:05:12 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:57.897 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.897 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.897 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.897 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.897 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:57.898 00:05:12 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.898 00:05:12 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:57.898 00:05:12 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:57.898 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.898 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.898 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:57.898 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.898 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.898 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:57.899 00:05:12 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.899 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.899 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.899 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
00:10:57.900 00:05:12 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.900 00:05:12 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:57.900 00:05:12 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:57.900 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.900 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:57.900 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.900 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.900 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:57.901 00:05:12 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:57.901 00:05:12 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:57.901 00:05:12 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:57.901 00:05:12 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:57.901 00:05:12 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.901 00:05:12 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:57.901 00:05:12 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:57.901 00:05:12 -- scripts/common.sh@15 -- # local i 00:10:57.901 00:05:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:57.901 00:05:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.901 00:05:12 -- scripts/common.sh@24 -- # return 0 00:10:57.901 00:05:12 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:57.901 00:05:12 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:57.901 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.901 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:57.901 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.901 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.901 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 
00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:57.902 00:05:12 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.902 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.902 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 
00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:57.903 00:05:12 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.903 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.903 00:05:12 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:10:57.903 00:05:12 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:57.903 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:57.904 00:05:12 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.904 00:05:12 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:57.904 00:05:12 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:57.904 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.904 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.904 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:57.904 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.904 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:57.905 00:05:12 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.905 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:57.905 00:05:12 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:57.905 00:05:12 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:57.905 00:05:12 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:57.905 00:05:12 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:57.905 00:05:12 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.905 00:05:12 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:57.905 00:05:12 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:57.905 00:05:12 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:57.905 00:05:12 -- scripts/common.sh@15 -- # local i 00:10:57.905 00:05:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:57.905 00:05:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.905 00:05:12 -- scripts/common.sh@24 -- # return 0 00:10:57.905 00:05:12 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:57.905 00:05:12 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:57.905 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:57.905 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.906 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.906 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.906 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:57.906 00:05:12 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:57.907 00:05:12 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.907 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:57.907 00:05:12 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.907 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:57.908 00:05:12 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.908 00:05:12 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:57.908 00:05:12 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:57.908 00:05:12 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@18 -- # shift 00:10:57.908 00:05:12 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.908 
00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.908 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.908 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:57.908 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.909 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:57.909 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.909 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:57.910 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.910 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.910 00:05:12 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.910 00:05:12 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # IFS=: 00:10:57.910 00:05:12 -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.910 00:05:12 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:57.910 00:05:12 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:57.910 00:05:12 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:57.910 00:05:12 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:57.910 00:05:12 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:57.910 00:05:12 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:57.910 00:05:12 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:57.910 00:05:12 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:57.910 00:05:12 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:57.910 00:05:12 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:57.910 00:05:12 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # echo nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:57.910 00:05:12 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:57.910 
00:05:12 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:57.910 00:05:12 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:57.910 00:05:12 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # echo nvme0 00:10:57.910 00:05:12 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # echo nvme3 00:10:57.910 00:05:12 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:57.910 00:05:12 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:57.910 00:05:12 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:57.910 00:05:12 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:57.910 00:05:12 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:57.910 00:05:12 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:57.910 00:05:12 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:57.910 00:05:12 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@197 -- # echo nvme2 00:10:57.910 00:05:12 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:57.910 00:05:12 -- nvme/functions.sh@206 -- # echo nvme1 00:10:57.910 00:05:12 -- nvme/functions.sh@207 -- # return 0 00:10:57.910 00:05:12 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:57.910 00:05:12 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:57.910 00:05:12 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:58.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.846 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.846 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.846 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.846 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.846 00:05:13 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:58.846 00:05:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:58.846 00:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.846 00:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:58.846 ************************************ 00:10:58.846 START TEST nvme_simple_copy 00:10:58.846 ************************************ 00:10:58.846 00:05:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:59.106 Initializing NVMe Controllers 00:10:59.106 Attaching to 0000:00:08.0 00:10:59.106 Controller supports SCC. Attached to 0000:00:08.0 00:10:59.106 Namespace ID: 1 size: 4GB 00:10:59.106 Initialization complete. 00:10:59.106 00:10:59.106 Controller QEMU NVMe Ctrl (12342 ) 00:10:59.106 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:59.106 Namespace Block Size:4096 00:10:59.106 Writing LBAs 0 to 63 with Random Data 00:10:59.106 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:59.106 LBAs matching Written Data: 64 00:10:59.106 00:10:59.106 real 0m0.232s 00:10:59.106 user 0m0.077s 00:10:59.106 sys 0m0.053s 00:10:59.106 00:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.106 00:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:59.106 ************************************ 00:10:59.106 END TEST nvme_simple_copy 00:10:59.106 ************************************ 00:10:59.106 00:10:59.106 real 0m7.403s 00:10:59.106 user 0m1.051s 00:10:59.106 sys 0m1.291s 00:10:59.106 00:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.106 00:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:59.106 ************************************ 00:10:59.106 END TEST nvme_scc 00:10:59.106 ************************************ 00:10:59.365 00:05:13 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:59.365 00:05:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:59.365 00:05:13 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:59.365 00:05:13 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:59.365 00:05:13 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:59.365 00:05:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:59.365 00:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.365 00:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:59.365 ************************************ 00:10:59.365 START TEST nvme_fdp 00:10:59.365 ************************************ 00:10:59.365 00:05:13 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:59.365 * Looking for test storage... 
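Two things happened in that stretch of the log. First, get_ctrl_with_feature scc walked every scanned controller and kept the ones whose ONCS field has bit 8 set, the Copy (Simple Copy) capability bit; with oncs=0x15d on all four controllers the test (( oncs & 1 << 8 )) is true everywhere (0x15d & 0x100 = 0x100), and nvme1 at 0000:00:08.0 was echoed first and became the test target. Second, nvme_simple_copy wrote LBAs 0-63 with random data, issued a Copy to destination LBA 256, and confirmed 64 LBAs matched. A hedged illustration of the capability check, using the ONCS value captured above (the variable name is illustrative; only the bit arithmetic mirrors the trace):

oncs=0x15d                        # Optional NVM Command Support from id-ctrl
if (( oncs & (1 << 8) )); then    # bit 8 = Copy command (Simple Copy) supported
  echo "controller supports simple copy"   # 0x15d & 0x100 == 0x100, branch taken
fi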
00:10:59.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:59.365 00:05:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:59.365 00:05:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:59.365 00:05:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:59.365 00:05:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:59.365 00:05:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:59.365 00:05:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:59.365 00:05:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:59.365 00:05:13 -- scripts/common.sh@335 -- # IFS=.-: 00:10:59.365 00:05:13 -- scripts/common.sh@335 -- # read -ra ver1 00:10:59.365 00:05:13 -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.365 00:05:13 -- scripts/common.sh@336 -- # read -ra ver2 00:10:59.365 00:05:13 -- scripts/common.sh@337 -- # local 'op=<' 00:10:59.365 00:05:13 -- scripts/common.sh@339 -- # ver1_l=2 00:10:59.365 00:05:13 -- scripts/common.sh@340 -- # ver2_l=1 00:10:59.365 00:05:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:59.365 00:05:13 -- scripts/common.sh@343 -- # case "$op" in 00:10:59.365 00:05:13 -- scripts/common.sh@344 -- # : 1 00:10:59.365 00:05:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:59.365 00:05:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.365 00:05:13 -- scripts/common.sh@364 -- # decimal 1 00:10:59.365 00:05:13 -- scripts/common.sh@352 -- # local d=1 00:10:59.365 00:05:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.365 00:05:13 -- scripts/common.sh@354 -- # echo 1 00:10:59.365 00:05:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:59.365 00:05:13 -- scripts/common.sh@365 -- # decimal 2 00:10:59.365 00:05:13 -- scripts/common.sh@352 -- # local d=2 00:10:59.365 00:05:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.365 00:05:13 -- scripts/common.sh@354 -- # echo 2 00:10:59.365 00:05:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:59.365 00:05:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:59.365 00:05:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:59.365 00:05:13 -- scripts/common.sh@367 -- # return 0 00:10:59.365 00:05:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.365 00:05:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.365 --rc genhtml_branch_coverage=1 00:10:59.365 --rc genhtml_function_coverage=1 00:10:59.365 --rc genhtml_legend=1 00:10:59.365 --rc geninfo_all_blocks=1 00:10:59.365 --rc geninfo_unexecuted_blocks=1 00:10:59.365 00:10:59.365 ' 00:10:59.365 00:05:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.365 --rc genhtml_branch_coverage=1 00:10:59.365 --rc genhtml_function_coverage=1 00:10:59.365 --rc genhtml_legend=1 00:10:59.365 --rc geninfo_all_blocks=1 00:10:59.365 --rc geninfo_unexecuted_blocks=1 00:10:59.365 00:10:59.365 ' 00:10:59.365 00:05:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.365 --rc genhtml_branch_coverage=1 00:10:59.365 --rc genhtml_function_coverage=1 00:10:59.365 --rc genhtml_legend=1 00:10:59.365 --rc geninfo_all_blocks=1 00:10:59.365 --rc geninfo_unexecuted_blocks=1 00:10:59.365 00:10:59.365 ' 00:10:59.365 00:05:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:59.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.365 --rc genhtml_branch_coverage=1 00:10:59.365 --rc genhtml_function_coverage=1 00:10:59.365 --rc genhtml_legend=1 00:10:59.365 --rc geninfo_all_blocks=1 00:10:59.365 --rc geninfo_unexecuted_blocks=1 00:10:59.365 00:10:59.365 ' 00:10:59.365 00:05:13 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:59.365 00:05:13 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:59.365 00:05:13 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:59.365 00:05:13 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:59.365 00:05:13 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.365 00:05:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.365 00:05:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.365 00:05:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.365 00:05:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.365 00:05:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.365 00:05:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.365 00:05:13 -- paths/export.sh@5 -- # export PATH 00:10:59.365 00:05:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.365 00:05:13 -- nvme/functions.sh@10 -- # ctrls=() 00:10:59.365 00:05:13 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:59.365 00:05:13 -- nvme/functions.sh@11 -- # nvmes=() 00:10:59.366 00:05:13 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:59.366 00:05:13 -- nvme/functions.sh@12 -- # bdfs=() 00:10:59.366 00:05:13 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:59.366 00:05:13 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:59.366 00:05:13 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:59.366 
00:05:13 -- nvme/functions.sh@14 -- # nvme_name= 00:10:59.366 00:05:13 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.366 00:05:13 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:59.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.934 Waiting for block devices as requested 00:10:59.934 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.934 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.934 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.934 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.210 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:05.210 00:05:19 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:05.210 00:05:19 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:05.210 00:05:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:05.210 00:05:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:05.210 00:05:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:05.210 00:05:19 -- scripts/common.sh@15 -- # local i 00:11:05.210 00:05:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:05.210 00:05:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.210 00:05:19 -- scripts/common.sh@24 -- # return 0 00:11:05.210 00:05:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:05.210 00:05:19 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:05.210 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.210 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:05.210 00:05:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.210 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.210 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:05.210 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 
00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:05.211 
00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.211 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.211 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.211 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:05.212 00:05:19 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 
00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.212 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.212 00:05:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:05.212 00:05:19 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:05.213 00:05:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
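The trace above shows nvme/functions.sh's nvme_get populating the global associative array nvme0 from `nvme id-ctrl /dev/nvme0` output: each "field : value" line is split with IFS=: and `read -r reg val`, then assigned via eval. A minimal sketch of that loop, reconstructed only from what the trace shows; the whitespace-trimming details and the deferred "$val" expansion are assumptions, and the real script may differ:

    # Sketch of the nvme_get parse loop implied by the trace above.
    # Assumptions: key/value trimming details; the real nvme/functions.sh
    # resets the array with `local -gA 'nvme0=()'` and inlines the value
    # in the eval string instead of expanding "$val".
    nvme_get_sketch() {
        local ref=$1 reg val                          # $1 is the array name, e.g. nvme0
        shift
        declare -gA "$ref"                            # make the target a global assoc array
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue      # skip headers and blank lines
            reg=${reg//[[:space:]]/}                  # "ps    0" -> "ps0"
            val=${val#"${val%%[![:space:]]*}"}        # drop leading spaces only
            eval "${ref}[$reg]=\"\$val\""             # e.g. nvme0[mdts]="7"
        done < <("$@")                                # e.g. nvme id-ctrl /dev/nvme0
    }
    # Usage (hypothetical): nvme_get_sketch nvme0 nvme id-ctrl /dev/nvme0
    #                       echo "${nvme0[sqes]}"     # -> 0x66 for the controller traced above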
00:11:05.213 00:05:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:05.213 00:05:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:05.213 00:05:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:05.213 00:05:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:05.213 00:05:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:05.213 00:05:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:05.213 00:05:19 -- scripts/common.sh@15 -- # local i 00:11:05.213 00:05:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:05.213 00:05:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.213 00:05:19 -- scripts/common.sh@24 -- # return 0 00:11:05.213 00:05:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:05.213 00:05:19 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:05.213 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.213 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:05.213 
00:05:19 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.213 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.213 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.213 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 
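The scripts/common.sh calls interleaved above (pci_can_use 0000:00:08.0 returning 0 before nvme1 is adopted) act as a PCI-address gate in front of each controller. A rough sketch of that gate, assuming the allow/deny lists arrive as whitespace-separated environment variables named PCI_ALLOWED and PCI_BLOCKED; those names are an assumption for illustration, though the empty-list shortcut matches the `[[ -z '' ]]` check seen in the trace:

    # Hypothetical reconstruction of the pci_can_use() gate from scripts/common.sh.
    # PCI_ALLOWED / PCI_BLOCKED are assumed variable names, not confirmed by this log.
    pci_can_use_sketch() {
        local pci=$1 addr
        for addr in $PCI_BLOCKED; do                  # an explicit deny entry wins
            [[ $addr == "$pci" ]] && return 1
        done
        [[ -z $PCI_ALLOWED ]] && return 0             # empty allow list = allow everything
        for addr in $PCI_ALLOWED; do
            [[ $addr == "$pci" ]] && return 0
        done
        return 1
    }
    # Usage: pci_can_use_sketch 0000:00:08.0 && ctrl_dev=nvme1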
00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 
00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.214 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:05.214 00:05:19 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:05.214 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:05.215 00:05:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:05.215 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.215 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:05.215 00:05:19 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:05.216 00:05:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:05.216 00:05:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:05.216 00:05:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:05.216 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.216 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 
00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.216 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:05.216 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.216 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:05.217 00:05:19 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:05.217 00:05:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:05.217 00:05:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:05.217 00:05:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:05.217 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.217 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 
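The flbas and lbaf fields captured just above pin down the namespace's active LBA format: flbas=0x4 selects lbaf4, whose "lbads:12" means 2^12-byte blocks. A small hypothetical helper (not part of anything shown in this log) that derives the in-use block size from one of the parsed arrays:

    # Hypothetical helper: derive the in-use LBA data size from the fields parsed above.
    lba_block_size_sketch() {
        local -n ns=$1                          # e.g. nvme1n1 (bash nameref to the parsed array)
        local fmt lbads
        fmt=$(( ${ns[flbas]} & 0xf ))           # low nibble of flbas = LBA format index (4 here)
        lbads=${ns[lbaf$fmt]}                   # "ms:0 lbads:12 rp:0 (in use)"
        lbads=${lbads##*lbads:}                 # strip everything up to "lbads:"
        lbads=${lbads%% *}                      # keep just the number (12)
        echo $(( 1 << lbads ))                  # 2^12 = 4096-byte blocks
    }
    # Usage: lba_block_size_sketch nvme1n1     # -> 4096 for the namespace traced above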
00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.217 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.217 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:05.217 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:05.218 00:05:19 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 
00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:05.218 00:05:19 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.218 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.218 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:05.218 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:05.219 00:05:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:05.219 00:05:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
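(Annotation.) The lbaf0..lbaf7 entries recorded above describe the namespace's LBA formats, and flbas=0x4 selects lbaf4, the one marked "(in use)": lbads:12 means a 4096-byte data block (lbads:9 would be 512 bytes). A minimal, self-contained sketch of that derivation, using values copied from this trace (ns_info is a hypothetical stand-in for the parsed array):

  #!/usr/bin/env bash
  # Derive the in-use block size from flbas and the matching lbaf entry.
  declare -A ns_info=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
  fmt=$(( ns_info[flbas] & 0xf ))              # low nibble of FLBAS selects the format
  lbaf=${ns_info[lbaf$fmt]}                    # e.g. "ms:0 lbads:12 rp:0 (in use)"
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # pull the lbads exponent out of the string
  echo "in-use block size: $((1 << lbads)) bytes"   # lbads:12 -> 4096, lbads:9 -> 512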
00:11:05.219 00:05:19 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:05.219 00:05:19 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:05.219 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.219 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.219 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.219 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:05.219 00:05:19 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:05.220 00:05:19 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:05.220 00:05:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:05.220 00:05:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:05.220 00:05:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:05.220 00:05:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:05.220 00:05:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:05.220 00:05:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:05.220 00:05:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:05.220 00:05:19 -- scripts/common.sh@15 -- # local i 00:11:05.220 00:05:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:05.220 00:05:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.220 00:05:19 -- scripts/common.sh@24 -- # return 0 00:11:05.220 00:05:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:05.220 00:05:19 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:05.220 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.220 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.220 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:05.220 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:05.220 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
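(Annotation.) Just before this block the trace finished nvme1 and moved on to nvme2: the outer loop iterates over /sys/class/nvme/nvme*, resolves the controller's PCI address (0000:00:08.0 for nvme1, 0000:00:06.0 for nvme2 here), checks it with pci_can_use, and records the controller in the ctrls/nvmes/bdfs/ordered_ctrls maps before scanning its namespaces. A minimal sketch of that enumeration, assuming the PCI BDF can be read from the controller's sysfs "address" attribute (the trace only shows the resulting value, not where it came from); the variable names are illustrative:

  #!/usr/bin/env bash
  # Enumerate NVMe controllers and their namespaces from sysfs.
  declare -A ctrls bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      name=${ctrl##*/}                           # nvme0, nvme1, ...
      bdf=$(cat "$ctrl/address" 2>/dev/null)     # e.g. 0000:00:08.0 (assumption: sysfs attr)
      ctrls[$name]=$name
      bdfs[$name]=$bdf
      for ns in "$ctrl/${ctrl##*/}n"*; do        # nvme1n1, nvme1n2, ...
          [[ -e $ns ]] || continue
          echo "found namespace ${ns##*/} on $name ($bdf)"
      done
  done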
00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 
00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:05.221 00:05:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.221 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.221 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:05.221 00:05:19 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.221 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 
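(Annotation.) Once id-ctrl fields like oncs=0x15d and vwc=0x7 are in the array, later test code can check capability bits directly. A small self-contained sketch, using values copied from this trace; ctrl_info is a hypothetical stand-in for the parsed array, and the bit positions (ONCS bit 2 = Dataset Management, VWC bit 0 = volatile write cache present) are taken from the NVMe base specification rather than from this log:

  #!/usr/bin/env bash
  # Test optional-command and write-cache capability bits from parsed id-ctrl data.
  declare -A ctrl_info=( [oncs]=0x15d [vwc]=0x7 )
  if (( ctrl_info[oncs] & (1 << 2) )); then
      echo "controller reports Dataset Management (trim) support"
  fi
  (( ctrl_info[vwc] & 0x1 )) && echo "volatile write cache present"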
00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.222 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.222 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:05.222 00:05:19 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:05.223 00:05:19 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:05.223 00:05:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:05.223 00:05:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:05.223 00:05:19 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:05.223 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.223 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 
00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:05.223 00:05:19 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.223 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:05.223 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:05.223 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:05.224 
00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:05.224 00:05:19 
-- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.224 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:05.224 00:05:19 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:05.224 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:05.485 00:05:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:05.485 00:05:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:05.485 00:05:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:05.485 00:05:19 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:05.485 00:05:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:05.485 00:05:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:05.485 00:05:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:05.485 00:05:19 -- scripts/common.sh@15 -- # local i 00:11:05.485 00:05:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:05.485 00:05:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.485 00:05:19 -- scripts/common.sh@24 -- # return 0 00:11:05.485 00:05:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:05.485 00:05:19 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:05.485 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.485 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:05.485 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.485 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.485 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 
00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.486 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.486 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:05.486 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:05.487 
00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.487 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.487 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:05.487 00:05:19 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:05.488 00:05:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:05.488 00:05:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:05.488 00:05:19 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:05.488 00:05:19 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@18 -- # shift 00:11:05.488 00:05:19 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:05.488 
00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:05.488 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.488 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.488 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 
00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:05.489 00:05:19 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # IFS=: 00:11:05.489 00:05:19 -- nvme/functions.sh@21 -- # read -r reg val 00:11:05.489 00:05:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:05.489 00:05:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:05.489 00:05:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:05.489 00:05:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:05.489 00:05:19 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:05.489 00:05:19 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:05.489 00:05:19 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:05.489 00:05:19 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:05.489 00:05:19 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:05.489 00:05:19 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:05.489 00:05:19 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:05.489 00:05:19 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:05.489 00:05:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:05.489 00:05:19 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:05.489 00:05:19 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:05.489 00:05:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:05.489 00:05:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:05.489 00:05:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:05.489 00:05:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:05.489 00:05:19 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:05.489 00:05:19 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:05.489 00:05:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:05.489 00:05:19 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:05.489 00:05:19 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:05.489 00:05:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:05.489 00:05:19 -- nvme/functions.sh@197 -- # echo nvme0 00:11:05.489 00:05:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:05.489 00:05:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:05.489 00:05:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:05.489 00:05:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:05.489 00:05:19 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:05.489 00:05:19 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:05.489 00:05:19 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:05.489 00:05:19 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:05.489 00:05:19 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:05.489 00:05:19 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:05.489 00:05:19 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:05.489 00:05:19 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:05.489 00:05:19 -- nvme/functions.sh@204 -- # trap - ERR 00:11:05.489 00:05:19 -- nvme/functions.sh@204 -- # print_backtrace 00:11:05.490 00:05:19 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:05.490 00:05:19 -- common/autotest_common.sh@1142 -- # return 0 00:11:05.490 00:05:19 -- nvme/functions.sh@204 -- # trap - ERR 00:11:05.490 00:05:19 -- nvme/functions.sh@204 -- # print_backtrace 00:11:05.490 00:05:19 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:05.490 00:05:19 -- common/autotest_common.sh@1142 -- # return 0 00:11:05.490 00:05:19 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:05.490 00:05:19 -- nvme/functions.sh@206 -- # echo nvme0 00:11:05.490 00:05:19 -- nvme/functions.sh@207 -- # return 0 00:11:05.490 00:05:19 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:05.490 00:05:19 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:05.490 00:05:19 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:06.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.318 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:06.318 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.318 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.318 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:06.318 00:05:20 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:06.318 00:05:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:06.318 00:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.318 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.318 ************************************ 00:11:06.318 START TEST nvme_flexible_data_placement 00:11:06.318 ************************************ 00:11:06.318 00:05:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:06.578 Initializing NVMe Controllers 00:11:06.578 Attaching to 0000:00:09.0 00:11:06.578 Controller supports FDP Attached to 0000:00:09.0 00:11:06.578 Namespace ID: 1 Endurance Group ID: 1 00:11:06.579 Initialization complete. 00:11:06.579 00:11:06.579 ================================== 00:11:06.579 == FDP tests for Namespace: #01 == 00:11:06.579 ================================== 00:11:06.579 00:11:06.579 Get Feature: FDP: 00:11:06.579 ================= 00:11:06.579 Enabled: Yes 00:11:06.579 FDP configuration Index: 0 00:11:06.579 00:11:06.579 FDP configurations log page 00:11:06.579 =========================== 00:11:06.579 Number of FDP configurations: 1 00:11:06.579 Version: 0 00:11:06.579 Size: 112 00:11:06.579 FDP Configuration Descriptor: 0 00:11:06.579 Descriptor Size: 96 00:11:06.579 Reclaim Group Identifier format: 2 00:11:06.579 FDP Volatile Write Cache: Not Present 00:11:06.579 FDP Configuration: Valid 00:11:06.579 Vendor Specific Size: 0 00:11:06.579 Number of Reclaim Groups: 2 00:11:06.579 Number of Recalim Unit Handles: 8 00:11:06.579 Max Placement Identifiers: 128 00:11:06.579 Number of Namespaces Suppprted: 256 00:11:06.579 Reclaim unit Nominal Size: 6000000 bytes 00:11:06.579 Estimated Reclaim Unit Time Limit: Not Reported 00:11:06.579 RUH Desc #000: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #001: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #002: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #003: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #004: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #005: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #006: RUH Type: Initially Isolated 00:11:06.579 RUH Desc #007: RUH Type: Initially Isolated 00:11:06.579 00:11:06.579 FDP reclaim unit handle usage log page 00:11:06.579 ====================================== 00:11:06.579 Number of Reclaim Unit Handles: 8 00:11:06.579 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:06.579 RUH Usage Desc #001: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #002: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #003: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #004: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #005: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #006: RUH Attributes: Unused 00:11:06.579 RUH Usage Desc #007: RUH Attributes: Unused 00:11:06.579 00:11:06.579 FDP statistics log page 00:11:06.579 ======================= 00:11:06.579 Host bytes with metadata written: 2038050816 00:11:06.579 Media bytes with metadata written: 2039332864 00:11:06.579 Media bytes erased: 0 00:11:06.579 00:11:06.579 FDP Reclaim unit handle status 
00:11:06.579 ============================== 00:11:06.579 Number of RUHS descriptors: 2 00:11:06.579 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000485d 00:11:06.579 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:06.579 00:11:06.579 FDP write on placement id: 0 success 00:11:06.579 00:11:06.579 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:06.579 00:11:06.579 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:06.579 00:11:06.579 Get Feature: FDP Events for Placement handle: #0 00:11:06.579 ======================== 00:11:06.579 Number of FDP Events: 6 00:11:06.579 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:06.579 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:06.579 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:06.579 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:06.579 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:06.579 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:06.579 00:11:06.579 FDP events log page 00:11:06.579 =================== 00:11:06.579 Number of FDP events: 1 00:11:06.579 FDP Event #0: 00:11:06.579 Event Type: RU Not Written to Capacity 00:11:06.579 Placement Identifier: Valid 00:11:06.579 NSID: Valid 00:11:06.579 Location: Valid 00:11:06.579 Placement Identifier: 0 00:11:06.579 Event Timestamp: 5 00:11:06.579 Namespace Identifier: 1 00:11:06.579 Reclaim Group Identifier: 0 00:11:06.579 Reclaim Unit Handle Identifier: 0 00:11:06.579 00:11:06.579 FDP test passed 00:11:06.579 00:11:06.579 real 0m0.220s 00:11:06.579 user 0m0.061s 00:11:06.579 sys 0m0.057s 00:11:06.579 00:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:06.579 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:11:06.579 ************************************ 00:11:06.579 END TEST nvme_flexible_data_placement 00:11:06.579 ************************************ 00:11:06.579 ************************************ 00:11:06.579 END TEST nvme_fdp 00:11:06.579 ************************************ 00:11:06.579 00:11:06.579 real 0m7.415s 00:11:06.579 user 0m1.001s 00:11:06.579 sys 0m1.347s 00:11:06.579 00:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:06.579 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:11:06.579 00:05:21 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:06.579 00:05:21 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:06.579 00:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:06.579 00:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.579 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 ************************************ 00:11:06.839 START TEST nvme_rpc 00:11:06.839 ************************************ 00:11:06.839 00:05:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:06.839 * Looking for test storage... 
00:11:06.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:06.839 00:05:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:06.839 00:05:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:06.839 00:05:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:06.839 00:05:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:06.839 00:05:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:06.839 00:05:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:06.839 00:05:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:06.839 00:05:21 -- scripts/common.sh@335 -- # IFS=.-: 00:11:06.839 00:05:21 -- scripts/common.sh@335 -- # read -ra ver1 00:11:06.839 00:05:21 -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.839 00:05:21 -- scripts/common.sh@336 -- # read -ra ver2 00:11:06.839 00:05:21 -- scripts/common.sh@337 -- # local 'op=<' 00:11:06.839 00:05:21 -- scripts/common.sh@339 -- # ver1_l=2 00:11:06.839 00:05:21 -- scripts/common.sh@340 -- # ver2_l=1 00:11:06.839 00:05:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:06.839 00:05:21 -- scripts/common.sh@343 -- # case "$op" in 00:11:06.839 00:05:21 -- scripts/common.sh@344 -- # : 1 00:11:06.839 00:05:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:06.839 00:05:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.839 00:05:21 -- scripts/common.sh@364 -- # decimal 1 00:11:06.839 00:05:21 -- scripts/common.sh@352 -- # local d=1 00:11:06.839 00:05:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.839 00:05:21 -- scripts/common.sh@354 -- # echo 1 00:11:06.839 00:05:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:06.839 00:05:21 -- scripts/common.sh@365 -- # decimal 2 00:11:06.839 00:05:21 -- scripts/common.sh@352 -- # local d=2 00:11:06.839 00:05:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.839 00:05:21 -- scripts/common.sh@354 -- # echo 2 00:11:06.839 00:05:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:06.839 00:05:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:06.839 00:05:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:06.839 00:05:21 -- scripts/common.sh@367 -- # return 0 00:11:06.839 00:05:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.839 00:05:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.839 --rc genhtml_branch_coverage=1 00:11:06.839 --rc genhtml_function_coverage=1 00:11:06.839 --rc genhtml_legend=1 00:11:06.839 --rc geninfo_all_blocks=1 00:11:06.839 --rc geninfo_unexecuted_blocks=1 00:11:06.839 00:11:06.839 ' 00:11:06.839 00:05:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.839 --rc genhtml_branch_coverage=1 00:11:06.839 --rc genhtml_function_coverage=1 00:11:06.839 --rc genhtml_legend=1 00:11:06.839 --rc geninfo_all_blocks=1 00:11:06.839 --rc geninfo_unexecuted_blocks=1 00:11:06.839 00:11:06.839 ' 00:11:06.839 00:05:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.839 --rc genhtml_branch_coverage=1 00:11:06.839 --rc genhtml_function_coverage=1 00:11:06.839 --rc genhtml_legend=1 00:11:06.839 --rc geninfo_all_blocks=1 00:11:06.839 --rc geninfo_unexecuted_blocks=1 00:11:06.839 00:11:06.839 ' 00:11:06.839 00:05:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:06.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.839 --rc genhtml_branch_coverage=1 00:11:06.839 --rc genhtml_function_coverage=1 00:11:06.839 --rc genhtml_legend=1 00:11:06.839 --rc geninfo_all_blocks=1 00:11:06.839 --rc geninfo_unexecuted_blocks=1 00:11:06.839 00:11:06.839 ' 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:06.839 00:05:21 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:06.839 00:05:21 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:06.839 00:05:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:06.839 00:05:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:06.839 00:05:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:06.839 00:05:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:06.839 00:05:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:06.839 00:05:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:06.839 00:05:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:06.839 00:05:21 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:06.839 00:05:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:06.839 00:05:21 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:06.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78253 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:06.839 00:05:21 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78253 00:11:06.839 00:05:21 -- common/autotest_common.sh@829 -- # '[' -z 78253 ']' 00:11:06.839 00:05:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.839 00:05:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.839 00:05:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.839 00:05:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.839 00:05:21 -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 [2024-11-28 00:05:21.437925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:06.839 [2024-11-28 00:05:21.438038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78253 ] 00:11:07.102 [2024-11-28 00:05:21.583145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:07.102 [2024-11-28 00:05:21.613936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:07.102 [2024-11-28 00:05:21.614323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.102 [2024-11-28 00:05:21.614418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.674 00:05:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.674 00:05:22 -- common/autotest_common.sh@862 -- # return 0 00:11:07.674 00:05:22 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:07.934 Nvme0n1 00:11:07.934 00:05:22 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:07.934 00:05:22 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:08.194 request: 00:11:08.194 { 00:11:08.194 "filename": "non_existing_file", 00:11:08.194 "bdev_name": "Nvme0n1", 00:11:08.194 "method": "bdev_nvme_apply_firmware", 00:11:08.194 "req_id": 1 00:11:08.194 } 00:11:08.194 Got JSON-RPC error response 00:11:08.194 response: 00:11:08.194 { 00:11:08.194 "code": -32603, 00:11:08.194 "message": "open file failed." 00:11:08.194 } 00:11:08.194 00:05:22 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:08.194 00:05:22 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:08.194 00:05:22 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:08.456 00:05:22 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:08.456 00:05:22 -- nvme/nvme_rpc.sh@40 -- # killprocess 78253 00:11:08.456 00:05:22 -- common/autotest_common.sh@936 -- # '[' -z 78253 ']' 00:11:08.456 00:05:22 -- common/autotest_common.sh@940 -- # kill -0 78253 00:11:08.456 00:05:22 -- common/autotest_common.sh@941 -- # uname 00:11:08.456 00:05:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.456 00:05:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78253 00:11:08.456 killing process with pid 78253 00:11:08.456 00:05:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.456 00:05:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.456 00:05:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78253' 00:11:08.456 00:05:22 -- common/autotest_common.sh@955 -- # kill 78253 00:11:08.456 00:05:22 -- common/autotest_common.sh@960 -- # wait 78253 00:11:08.716 ************************************ 00:11:08.716 END TEST nvme_rpc 00:11:08.716 ************************************ 00:11:08.716 00:11:08.716 real 0m1.927s 00:11:08.716 user 0m3.736s 00:11:08.716 sys 0m0.422s 00:11:08.716 00:05:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.716 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:11:08.716 00:05:23 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.716 00:05:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 
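The nvme_rpc run above drives bdev_nvme_apply_firmware into a deliberate failure path: the firmware file does not exist, the RPC returns code -32603 ("open file failed."), and the script only passes because the call reports a non-zero status. A hedged sketch of that negative check, using the rpc.py path from the trace and assuming a controller Nvme0 has already been attached:
#!/usr/bin/env bash
# Expected-failure check in the spirit of nvme_rpc.sh: applying firmware from a
# missing file must fail; a zero exit status here would itself be a test failure.
set -u
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace
rv=0
"$rpc_py" bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
if (( rv == 0 )); then
    echo 'ERROR: bdev_nvme_apply_firmware unexpectedly succeeded' >&2
    exit 1
fi
echo "got expected JSON-RPC error (rv=$rv)"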
00:11:08.716 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:11:08.716 ************************************ 00:11:08.716 START TEST nvme_rpc_timeouts 00:11:08.716 ************************************ 00:11:08.716 00:05:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.716 * Looking for test storage... 00:11:08.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.716 00:05:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.716 00:05:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.716 00:05:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.716 00:05:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.716 00:05:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.716 00:05:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.716 00:05:23 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.716 00:05:23 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.716 00:05:23 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.716 00:05:23 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.716 00:05:23 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.716 00:05:23 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.716 00:05:23 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.716 00:05:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.716 00:05:23 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.716 00:05:23 -- scripts/common.sh@344 -- # : 1 00:11:08.716 00:05:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.716 00:05:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.716 00:05:23 -- scripts/common.sh@364 -- # decimal 1 00:11:08.716 00:05:23 -- scripts/common.sh@352 -- # local d=1 00:11:08.716 00:05:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.716 00:05:23 -- scripts/common.sh@354 -- # echo 1 00:11:08.716 00:05:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:08.716 00:05:23 -- scripts/common.sh@365 -- # decimal 2 00:11:08.716 00:05:23 -- scripts/common.sh@352 -- # local d=2 00:11:08.716 00:05:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.716 00:05:23 -- scripts/common.sh@354 -- # echo 2 00:11:08.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.716 00:05:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:08.716 00:05:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:08.716 00:05:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:08.716 00:05:23 -- scripts/common.sh@367 -- # return 0 00:11:08.716 00:05:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.716 --rc genhtml_branch_coverage=1 00:11:08.716 --rc genhtml_function_coverage=1 00:11:08.716 --rc genhtml_legend=1 00:11:08.716 --rc geninfo_all_blocks=1 00:11:08.716 --rc geninfo_unexecuted_blocks=1 00:11:08.716 00:11:08.716 ' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.716 --rc genhtml_branch_coverage=1 00:11:08.716 --rc genhtml_function_coverage=1 00:11:08.716 --rc genhtml_legend=1 00:11:08.716 --rc geninfo_all_blocks=1 00:11:08.716 --rc geninfo_unexecuted_blocks=1 00:11:08.716 00:11:08.716 ' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.716 --rc genhtml_branch_coverage=1 00:11:08.716 --rc genhtml_function_coverage=1 00:11:08.716 --rc genhtml_legend=1 00:11:08.716 --rc geninfo_all_blocks=1 00:11:08.716 --rc geninfo_unexecuted_blocks=1 00:11:08.716 00:11:08.716 ' 00:11:08.716 00:05:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.716 --rc genhtml_branch_coverage=1 00:11:08.716 --rc genhtml_function_coverage=1 00:11:08.716 --rc genhtml_legend=1 00:11:08.716 --rc geninfo_all_blocks=1 00:11:08.716 --rc geninfo_unexecuted_blocks=1 00:11:08.716 00:11:08.716 ' 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78301 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78301 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78333 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:08.716 00:05:23 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78333 00:11:08.716 00:05:23 -- common/autotest_common.sh@829 -- # '[' -z 78333 ']' 00:11:08.716 00:05:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.716 00:05:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.716 00:05:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.716 00:05:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.717 00:05:23 -- common/autotest_common.sh@10 -- # set +x 00:11:08.717 00:05:23 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:08.977 [2024-11-28 00:05:23.348833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
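The timeouts test above launches its own spdk_tgt with -m 0x3, records the PID, installs a kill trap, and then blocks in waitforlisten until the RPC socket answers. The sketch below is an illustrative version of that start-and-wait sequence, not the real waitforlisten helper; the polling loop and the rpc_get_methods probe are assumptions, while the binary path, core mask, trap, and retry count mirror the trace.
#!/usr/bin/env bash
# Illustrative start-and-wait sequence for an SPDK target under test.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
for (( i = 0; i < 100; i++ )); do                 # mirrors max_retries=100
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        echo "spdk_tgt (pid ${spdk_tgt_pid}) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.5
done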
00:11:08.977 [2024-11-28 00:05:23.348955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78333 ] 00:11:08.977 [2024-11-28 00:05:23.495695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.977 [2024-11-28 00:05:23.525451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.977 [2024-11-28 00:05:23.526062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.977 [2024-11-28 00:05:23.526094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.917 Checking default timeout settings: 00:11:09.917 00:05:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.917 00:05:24 -- common/autotest_common.sh@862 -- # return 0 00:11:09.917 00:05:24 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:09.917 00:05:24 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:09.917 Making settings changes with rpc: 00:11:09.917 00:05:24 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:09.917 00:05:24 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:10.178 Check default vs. modified settings: 00:11:10.178 00:05:24 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:10.178 00:05:24 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78301 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78301 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.439 Setting action_on_timeout is changed as expected. 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78301 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78301 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.439 Setting timeout_us is changed as expected. 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.439 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78301 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78301 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.440 Setting timeout_admin_us is changed as expected. 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78301 /tmp/settings_modified_78301 00:11:10.440 00:05:24 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78333 00:11:10.440 00:05:24 -- common/autotest_common.sh@936 -- # '[' -z 78333 ']' 00:11:10.440 00:05:24 -- common/autotest_common.sh@940 -- # kill -0 78333 00:11:10.440 00:05:24 -- common/autotest_common.sh@941 -- # uname 00:11:10.440 00:05:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.440 00:05:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78333 00:11:10.440 killing process with pid 78333 00:11:10.440 00:05:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:10.440 00:05:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:10.440 00:05:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78333' 00:11:10.440 00:05:24 -- common/autotest_common.sh@955 -- # kill 78333 00:11:10.440 00:05:24 -- common/autotest_common.sh@960 -- # wait 78333 00:11:10.698 RPC TIMEOUT SETTING TEST PASSED. 00:11:10.698 00:05:25 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
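Each check above follows the same recipe: dump the JSON configuration twice with save_config (once with defaults, once after bdev_nvme_set_options), pull a single field out of both dumps, and require that it changed. A condensed sketch of that extraction, assuming the two dumps already exist under the /tmp/settings_*_78301 names used in the trace:
#!/usr/bin/env bash
# Compare one bdev_nvme setting between the default and modified config dumps,
# using the same grep/awk/sed extraction the nvme_rpc_timeouts checks trace above.
default_cfg=/tmp/settings_default_78301     # written earlier via rpc.py save_config
modified_cfg=/tmp/settings_modified_78301
setting=timeout_us
before=$(grep "$setting" "$default_cfg" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
after=$(grep "$setting" "$modified_cfg" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
if [[ $before == "$after" ]]; then
    echo "ERROR: $setting did not change ($before)" >&2
    exit 1
fi
echo "Setting $setting is changed as expected ($before -> $after)."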
00:11:10.698 ************************************ 00:11:10.698 END TEST nvme_rpc_timeouts 00:11:10.698 ************************************ 00:11:10.698 00:11:10.698 real 0m2.083s 00:11:10.698 user 0m4.158s 00:11:10.698 sys 0m0.429s 00:11:10.698 00:05:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.698 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:11:10.698 00:05:25 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:10.698 00:05:25 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:10.698 00:05:25 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:10.698 00:05:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.698 00:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.698 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:11:10.698 ************************************ 00:11:10.698 START TEST nvme_xnvme 00:11:10.698 ************************************ 00:11:10.698 00:05:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:10.959 * Looking for test storage... 00:11:10.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:10.959 00:05:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.959 00:05:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.959 00:05:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.959 00:05:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:10.959 00:05:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:10.959 00:05:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:10.959 00:05:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:10.959 00:05:25 -- scripts/common.sh@335 -- # IFS=.-: 00:11:10.959 00:05:25 -- scripts/common.sh@335 -- # read -ra ver1 00:11:10.959 00:05:25 -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.959 00:05:25 -- scripts/common.sh@336 -- # read -ra ver2 00:11:10.959 00:05:25 -- scripts/common.sh@337 -- # local 'op=<' 00:11:10.959 00:05:25 -- scripts/common.sh@339 -- # ver1_l=2 00:11:10.959 00:05:25 -- scripts/common.sh@340 -- # ver2_l=1 00:11:10.959 00:05:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:10.959 00:05:25 -- scripts/common.sh@343 -- # case "$op" in 00:11:10.959 00:05:25 -- scripts/common.sh@344 -- # : 1 00:11:10.959 00:05:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:10.959 00:05:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.959 00:05:25 -- scripts/common.sh@364 -- # decimal 1 00:11:10.959 00:05:25 -- scripts/common.sh@352 -- # local d=1 00:11:10.959 00:05:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.959 00:05:25 -- scripts/common.sh@354 -- # echo 1 00:11:10.959 00:05:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:10.960 00:05:25 -- scripts/common.sh@365 -- # decimal 2 00:11:10.960 00:05:25 -- scripts/common.sh@352 -- # local d=2 00:11:10.960 00:05:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.960 00:05:25 -- scripts/common.sh@354 -- # echo 2 00:11:10.960 00:05:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:10.960 00:05:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:10.960 00:05:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:10.960 00:05:25 -- scripts/common.sh@367 -- # return 0 00:11:10.960 00:05:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.960 00:05:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.960 --rc genhtml_branch_coverage=1 00:11:10.960 --rc genhtml_function_coverage=1 00:11:10.960 --rc genhtml_legend=1 00:11:10.960 --rc geninfo_all_blocks=1 00:11:10.960 --rc geninfo_unexecuted_blocks=1 00:11:10.960 00:11:10.960 ' 00:11:10.960 00:05:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.960 --rc genhtml_branch_coverage=1 00:11:10.960 --rc genhtml_function_coverage=1 00:11:10.960 --rc genhtml_legend=1 00:11:10.960 --rc geninfo_all_blocks=1 00:11:10.960 --rc geninfo_unexecuted_blocks=1 00:11:10.960 00:11:10.960 ' 00:11:10.960 00:05:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.960 --rc genhtml_branch_coverage=1 00:11:10.960 --rc genhtml_function_coverage=1 00:11:10.960 --rc genhtml_legend=1 00:11:10.960 --rc geninfo_all_blocks=1 00:11:10.960 --rc geninfo_unexecuted_blocks=1 00:11:10.960 00:11:10.960 ' 00:11:10.960 00:05:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:10.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.960 --rc genhtml_branch_coverage=1 00:11:10.960 --rc genhtml_function_coverage=1 00:11:10.960 --rc genhtml_legend=1 00:11:10.960 --rc geninfo_all_blocks=1 00:11:10.960 --rc geninfo_unexecuted_blocks=1 00:11:10.960 00:11:10.960 ' 00:11:10.960 00:05:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.960 00:05:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.960 00:05:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.960 00:05:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.960 00:05:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.960 00:05:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.960 00:05:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.960 00:05:25 -- paths/export.sh@5 -- # export PATH 00:11:10.960 00:05:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:10.960 00:05:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.960 00:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.960 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:11:10.960 ************************************ 00:11:10.960 START TEST xnvme_to_malloc_dd_copy 00:11:10.960 ************************************ 00:11:10.960 00:05:25 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:10.960 00:05:25 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:10.960 00:05:25 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:10.960 00:05:25 -- dd/common.sh@191 -- # return 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@18 -- # local io 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:10.960 00:05:25 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:10.960 00:05:25 -- dd/common.sh@31 -- # xtrace_disable 00:11:10.960 00:05:25 -- common/autotest_common.sh@10 -- # set +x 00:11:10.960 { 00:11:10.960 "subsystems": [ 00:11:10.960 { 00:11:10.960 "subsystem": "bdev", 00:11:10.960 "config": [ 00:11:10.960 { 00:11:10.960 "params": { 00:11:10.960 "block_size": 512, 00:11:10.960 "num_blocks": 2097152, 00:11:10.960 "name": "malloc0" 00:11:10.960 }, 00:11:10.960 "method": "bdev_malloc_create" 00:11:10.960 }, 00:11:10.960 { 00:11:10.960 "params": { 00:11:10.960 "io_mechanism": "libaio", 00:11:10.960 "filename": "/dev/nullb0", 00:11:10.960 "name": "null0" 00:11:10.960 }, 00:11:10.960 "method": "bdev_xnvme_create" 00:11:10.960 }, 00:11:10.960 { 00:11:10.960 "method": "bdev_wait_for_examine" 00:11:10.960 } 00:11:10.960 ] 00:11:10.960 } 00:11:10.960 ] 00:11:10.960 } 00:11:10.960 [2024-11-28 00:05:25.495632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:10.960 [2024-11-28 00:05:25.495825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78456 ] 00:11:11.220 [2024-11-28 00:05:25.644647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.220 [2024-11-28 00:05:25.673828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.607  [2024-11-28T00:05:28.153Z] Copying: 237/1024 [MB] (237 MBps) [2024-11-28T00:05:29.092Z] Copying: 469/1024 [MB] (232 MBps) [2024-11-28T00:05:30.028Z] Copying: 702/1024 [MB] (233 MBps) [2024-11-28T00:05:30.028Z] Copying: 1004/1024 [MB] (301 MBps) [2024-11-28T00:05:30.595Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:11:15.993 00:11:15.993 00:05:30 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:15.993 00:05:30 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:15.993 00:05:30 -- dd/common.sh@31 -- # xtrace_disable 00:11:15.993 00:05:30 -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 { 00:11:15.993 "subsystems": [ 00:11:15.993 { 00:11:15.993 "subsystem": "bdev", 00:11:15.993 "config": [ 00:11:15.993 { 00:11:15.993 "params": { 00:11:15.993 "block_size": 512, 00:11:15.993 "num_blocks": 2097152, 00:11:15.993 "name": "malloc0" 00:11:15.993 }, 00:11:15.993 "method": "bdev_malloc_create" 00:11:15.993 }, 00:11:15.993 { 00:11:15.993 "params": { 00:11:15.993 "io_mechanism": "libaio", 00:11:15.993 "filename": "/dev/nullb0", 00:11:15.993 "name": "null0" 00:11:15.993 }, 00:11:15.993 "method": "bdev_xnvme_create" 00:11:15.993 }, 00:11:15.993 { 00:11:15.993 "method": "bdev_wait_for_examine" 00:11:15.993 } 00:11:15.993 ] 00:11:15.993 } 00:11:15.993 ] 00:11:15.993 } 00:11:15.993 [2024-11-28 00:05:30.356439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
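None of the copy jobs above touch a config file on disk: gen_conf prints the bdev JSON (a malloc bdev plus an xnvme bdev over /dev/nullb0) and spdk_dd reads it through process substitution, which is why the trace shows --json /dev/fd/62. Below is a self-contained sketch of that invocation, reusing the config and flags from the libaio pass above; it assumes the null_blk module is already loaded (init_null_blk gb=1 earlier in the trace) and is a sketch, not the test script itself.
#!/usr/bin/env bash
# Hand an on-the-fly bdev config to spdk_dd via process substitution,
# mirroring the malloc0 -> null0 (libaio) pass traced above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
"$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)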
00:11:15.993 [2024-11-28 00:05:30.356543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78514 ] 00:11:15.993 [2024-11-28 00:05:30.503069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.993 [2024-11-28 00:05:30.530220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.389  [2024-11-28T00:05:32.927Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-28T00:05:33.863Z] Copying: 643/1024 [MB] (322 MBps) [2024-11-28T00:05:34.122Z] Copying: 966/1024 [MB] (322 MBps) [2024-11-28T00:05:34.380Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:11:19.778 00:11:19.778 00:05:34 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:19.778 00:05:34 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:19.778 00:05:34 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:19.778 00:05:34 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:19.778 00:05:34 -- dd/common.sh@31 -- # xtrace_disable 00:11:19.778 00:05:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.778 { 00:11:19.778 "subsystems": [ 00:11:19.778 { 00:11:19.778 "subsystem": "bdev", 00:11:19.778 "config": [ 00:11:19.778 { 00:11:19.778 "params": { 00:11:19.778 "block_size": 512, 00:11:19.778 "num_blocks": 2097152, 00:11:19.778 "name": "malloc0" 00:11:19.778 }, 00:11:19.778 "method": "bdev_malloc_create" 00:11:19.778 }, 00:11:19.778 { 00:11:19.778 "params": { 00:11:19.779 "io_mechanism": "io_uring", 00:11:19.779 "filename": "/dev/nullb0", 00:11:19.779 "name": "null0" 00:11:19.779 }, 00:11:19.779 "method": "bdev_xnvme_create" 00:11:19.779 }, 00:11:19.779 { 00:11:19.779 "method": "bdev_wait_for_examine" 00:11:19.779 } 00:11:19.779 ] 00:11:19.779 } 00:11:19.779 ] 00:11:19.779 } 00:11:19.779 [2024-11-28 00:05:34.327331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:19.779 [2024-11-28 00:05:34.327449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78564 ] 00:11:20.037 [2024-11-28 00:05:34.474477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.037 [2024-11-28 00:05:34.502500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.418  [2024-11-28T00:05:36.957Z] Copying: 326/1024 [MB] (326 MBps) [2024-11-28T00:05:37.892Z] Copying: 654/1024 [MB] (327 MBps) [2024-11-28T00:05:37.892Z] Copying: 981/1024 [MB] (327 MBps) [2024-11-28T00:05:38.150Z] Copying: 1024/1024 [MB] (average 327 MBps) 00:11:23.548 00:11:23.807 00:05:38 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:23.808 00:05:38 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:23.808 00:05:38 -- dd/common.sh@31 -- # xtrace_disable 00:11:23.808 00:05:38 -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 { 00:11:23.808 "subsystems": [ 00:11:23.808 { 00:11:23.808 "subsystem": "bdev", 00:11:23.808 "config": [ 00:11:23.808 { 00:11:23.808 "params": { 00:11:23.808 "block_size": 512, 00:11:23.808 "num_blocks": 2097152, 00:11:23.808 "name": "malloc0" 00:11:23.808 }, 00:11:23.808 "method": "bdev_malloc_create" 00:11:23.808 }, 00:11:23.808 { 00:11:23.808 "params": { 00:11:23.808 "io_mechanism": "io_uring", 00:11:23.808 "filename": "/dev/nullb0", 00:11:23.808 "name": "null0" 00:11:23.808 }, 00:11:23.808 "method": "bdev_xnvme_create" 00:11:23.808 }, 00:11:23.808 { 00:11:23.808 "method": "bdev_wait_for_examine" 00:11:23.808 } 00:11:23.808 ] 00:11:23.808 } 00:11:23.808 ] 00:11:23.808 } 00:11:23.808 [2024-11-28 00:05:38.208709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:23.808 [2024-11-28 00:05:38.208932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78618 ] 00:11:23.808 [2024-11-28 00:05:38.352522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.808 [2024-11-28 00:05:38.379854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.184  [2024-11-28T00:05:40.721Z] Copying: 328/1024 [MB] (328 MBps) [2024-11-28T00:05:41.657Z] Copying: 656/1024 [MB] (327 MBps) [2024-11-28T00:05:41.915Z] Copying: 985/1024 [MB] (328 MBps) [2024-11-28T00:05:42.174Z] Copying: 1024/1024 [MB] (average 328 MBps) 00:11:27.572 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:27.572 00:05:42 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:27.572 00:11:27.572 real 0m16.635s 00:11:27.572 user 0m13.852s 00:11:27.572 sys 0m2.285s 00:11:27.572 00:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.572 00:05:42 -- common/autotest_common.sh@10 -- # set +x 00:11:27.572 ************************************ 00:11:27.572 END TEST xnvme_to_malloc_dd_copy 00:11:27.572 ************************************ 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:27.572 00:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.572 00:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.572 00:05:42 -- common/autotest_common.sh@10 -- # set +x 00:11:27.572 ************************************ 00:11:27.572 START TEST xnvme_bdevperf 00:11:27.572 ************************************ 00:11:27.572 00:05:42 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:27.572 00:05:42 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:27.572 00:05:42 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:27.572 00:05:42 -- dd/common.sh@191 -- # return 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@60 -- # local io 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:27.572 00:05:42 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:27.572 00:05:42 -- dd/common.sh@31 -- # xtrace_disable 00:11:27.572 00:05:42 -- common/autotest_common.sh@10 -- # set +x 00:11:27.831 { 00:11:27.831 "subsystems": [ 00:11:27.831 { 00:11:27.831 "subsystem": "bdev", 00:11:27.831 "config": [ 00:11:27.831 { 00:11:27.831 "params": { 00:11:27.831 "io_mechanism": "libaio", 
00:11:27.831 "filename": "/dev/nullb0", 00:11:27.831 "name": "null0" 00:11:27.831 }, 00:11:27.831 "method": "bdev_xnvme_create" 00:11:27.831 }, 00:11:27.831 { 00:11:27.831 "method": "bdev_wait_for_examine" 00:11:27.831 } 00:11:27.831 ] 00:11:27.831 } 00:11:27.831 ] 00:11:27.831 } 00:11:27.831 [2024-11-28 00:05:42.207226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:27.831 [2024-11-28 00:05:42.207329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78695 ] 00:11:27.831 [2024-11-28 00:05:42.356719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.831 [2024-11-28 00:05:42.387328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.088 Running I/O for 5 seconds... 00:11:33.373 00:11:33.373 Latency(us) 00:11:33.373 [2024-11-28T00:05:47.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.373 [2024-11-28T00:05:47.975Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:33.373 null0 : 5.00 191094.86 746.46 0.00 0.00 332.48 107.13 463.16 00:11:33.373 [2024-11-28T00:05:47.975Z] =================================================================================================================== 00:11:33.373 [2024-11-28T00:05:47.976Z] Total : 191094.86 746.46 0.00 0.00 332.48 107.13 463.16 00:11:33.374 00:05:47 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:33.374 00:05:47 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:33.374 00:05:47 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:33.374 00:05:47 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:33.374 00:05:47 -- dd/common.sh@31 -- # xtrace_disable 00:11:33.374 00:05:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.374 { 00:11:33.374 "subsystems": [ 00:11:33.374 { 00:11:33.374 "subsystem": "bdev", 00:11:33.374 "config": [ 00:11:33.374 { 00:11:33.374 "params": { 00:11:33.374 "io_mechanism": "io_uring", 00:11:33.374 "filename": "/dev/nullb0", 00:11:33.374 "name": "null0" 00:11:33.374 }, 00:11:33.374 "method": "bdev_xnvme_create" 00:11:33.374 }, 00:11:33.374 { 00:11:33.374 "method": "bdev_wait_for_examine" 00:11:33.374 } 00:11:33.374 ] 00:11:33.374 } 00:11:33.374 ] 00:11:33.374 } 00:11:33.374 [2024-11-28 00:05:47.668638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:33.374 [2024-11-28 00:05:47.668861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78758 ] 00:11:33.374 [2024-11-28 00:05:47.813088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.374 [2024-11-28 00:05:47.839611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.374 Running I/O for 5 seconds... 
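The JSON fragments printed above are the bdev configuration that xnvme.sh's gen_conf feeds to bdevperf through /dev/fd/62: a single bdev_xnvme_create entry pointing at the null_blk device /dev/nullb0 under the name null0, generated once per io mechanism under test (libaio above, io_uring next). A rough standalone sketch of the same setup follows; the temp-file path is only illustrative and this is not the literal gen_conf output, but the device, bdev name, and bdevperf flags are copied from this run.

# Sketch: write the config shown above to a file and point bdevperf at it.
cat > /tmp/xnvme_null0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096

In the test itself nothing is written to disk: the config is generated on the fly and handed to bdevperf as a file descriptor, which is why it appears inline in the log.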
00:11:38.640 00:11:38.640 Latency(us) 00:11:38.640 [2024-11-28T00:05:53.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.640 [2024-11-28T00:05:53.242Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:38.640 null0 : 5.00 244673.03 955.75 0.00 0.00 259.58 146.51 313.50 00:11:38.640 [2024-11-28T00:05:53.242Z] =================================================================================================================== 00:11:38.640 [2024-11-28T00:05:53.242Z] Total : 244673.03 955.75 0.00 0.00 259.58 146.51 313.50 00:11:38.640 00:05:53 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:38.640 00:05:53 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:38.640 00:11:38.640 real 0m10.942s 00:11:38.640 user 0m8.548s 00:11:38.640 sys 0m2.168s 00:11:38.640 00:05:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.640 ************************************ 00:11:38.640 END TEST xnvme_bdevperf 00:11:38.640 ************************************ 00:11:38.640 00:05:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.640 ************************************ 00:11:38.640 END TEST nvme_xnvme 00:11:38.640 ************************************ 00:11:38.640 00:11:38.640 real 0m27.836s 00:11:38.640 user 0m22.519s 00:11:38.640 sys 0m4.554s 00:11:38.640 00:05:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.640 00:05:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.640 00:05:53 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:38.640 00:05:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:38.640 00:05:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.640 00:05:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.640 ************************************ 00:11:38.640 START TEST blockdev_xnvme 00:11:38.640 ************************************ 00:11:38.640 00:05:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:38.640 * Looking for test storage... 00:11:38.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:38.640 00:05:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:38.640 00:05:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:38.640 00:05:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:38.899 00:05:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:38.899 00:05:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:38.899 00:05:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:38.899 00:05:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:38.899 00:05:53 -- scripts/common.sh@335 -- # IFS=.-: 00:11:38.899 00:05:53 -- scripts/common.sh@335 -- # read -ra ver1 00:11:38.899 00:05:53 -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.899 00:05:53 -- scripts/common.sh@336 -- # read -ra ver2 00:11:38.899 00:05:53 -- scripts/common.sh@337 -- # local 'op=<' 00:11:38.899 00:05:53 -- scripts/common.sh@339 -- # ver1_l=2 00:11:38.899 00:05:53 -- scripts/common.sh@340 -- # ver2_l=1 00:11:38.899 00:05:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:38.899 00:05:53 -- scripts/common.sh@343 -- # case "$op" in 00:11:38.899 00:05:53 -- scripts/common.sh@344 -- # : 1 00:11:38.899 00:05:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:38.899 00:05:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.899 00:05:53 -- scripts/common.sh@364 -- # decimal 1 00:11:38.899 00:05:53 -- scripts/common.sh@352 -- # local d=1 00:11:38.899 00:05:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.899 00:05:53 -- scripts/common.sh@354 -- # echo 1 00:11:38.899 00:05:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:38.899 00:05:53 -- scripts/common.sh@365 -- # decimal 2 00:11:38.899 00:05:53 -- scripts/common.sh@352 -- # local d=2 00:11:38.899 00:05:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.899 00:05:53 -- scripts/common.sh@354 -- # echo 2 00:11:38.899 00:05:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:38.899 00:05:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:38.899 00:05:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:38.899 00:05:53 -- scripts/common.sh@367 -- # return 0 00:11:38.899 00:05:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.899 00:05:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:38.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.899 --rc genhtml_branch_coverage=1 00:11:38.899 --rc genhtml_function_coverage=1 00:11:38.899 --rc genhtml_legend=1 00:11:38.899 --rc geninfo_all_blocks=1 00:11:38.899 --rc geninfo_unexecuted_blocks=1 00:11:38.899 00:11:38.899 ' 00:11:38.899 00:05:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:38.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.899 --rc genhtml_branch_coverage=1 00:11:38.899 --rc genhtml_function_coverage=1 00:11:38.899 --rc genhtml_legend=1 00:11:38.899 --rc geninfo_all_blocks=1 00:11:38.899 --rc geninfo_unexecuted_blocks=1 00:11:38.899 00:11:38.899 ' 00:11:38.899 00:05:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:38.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.899 --rc genhtml_branch_coverage=1 00:11:38.899 --rc genhtml_function_coverage=1 00:11:38.899 --rc genhtml_legend=1 00:11:38.899 --rc geninfo_all_blocks=1 00:11:38.899 --rc geninfo_unexecuted_blocks=1 00:11:38.899 00:11:38.899 ' 00:11:38.899 00:05:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:38.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.899 --rc genhtml_branch_coverage=1 00:11:38.899 --rc genhtml_function_coverage=1 00:11:38.899 --rc genhtml_legend=1 00:11:38.899 --rc geninfo_all_blocks=1 00:11:38.899 --rc geninfo_unexecuted_blocks=1 00:11:38.899 00:11:38.899 ' 00:11:38.899 00:05:53 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:38.899 00:05:53 -- bdev/nbd_common.sh@6 -- # set -e 00:11:38.899 00:05:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:38.899 00:05:53 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:38.899 00:05:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:38.899 00:05:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:38.899 00:05:53 -- bdev/blockdev.sh@18 -- # : 00:11:38.899 00:05:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:38.899 00:05:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:38.899 00:05:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:38.899 00:05:53 -- bdev/blockdev.sh@672 -- # uname -s 00:11:38.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
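blockdev.sh is about to launch spdk_tgt (pid 78894 below) and wait for its RPC socket; once the target is listening, the script walks /dev/nvme*n*, skips any zoned namespaces, and registers each remaining device as an xnvme bdev over RPC, as the following lines show. A rough manual equivalent, assuming the default /var/tmp/spdk.sock socket and the io_uring mechanism used in this run (the polling loop is a simplification of what waitforlisten actually checks):

# Sketch only: start the target, wait for its RPC socket, register one namespace.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
cat /sys/block/nvme0n1/queue/zoned     # "none" means not zoned, so the device is eligible
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs

The script batches this for all six namespaces in one rpc_cmd session, which is why a single printf with six bdev_xnvme_create lines appears further down.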
00:11:38.899 00:05:53 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:38.899 00:05:53 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:38.899 00:05:53 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:38.899 00:05:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:38.899 00:05:53 -- bdev/blockdev.sh@682 -- # dek= 00:11:38.899 00:05:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:38.899 00:05:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:38.899 00:05:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:38.899 00:05:53 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:38.899 00:05:53 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:38.899 00:05:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:38.899 00:05:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=78894 00:11:38.900 00:05:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:38.900 00:05:53 -- bdev/blockdev.sh@47 -- # waitforlisten 78894 00:11:38.900 00:05:53 -- common/autotest_common.sh@829 -- # '[' -z 78894 ']' 00:11:38.900 00:05:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.900 00:05:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.900 00:05:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.900 00:05:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.900 00:05:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:38.900 00:05:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.900 [2024-11-28 00:05:53.368018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:38.900 [2024-11-28 00:05:53.368127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78894 ] 00:11:39.158 [2024-11-28 00:05:53.512490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.158 [2024-11-28 00:05:53.539425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:39.158 [2024-11-28 00:05:53.539583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.725 00:05:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.725 00:05:54 -- common/autotest_common.sh@862 -- # return 0 00:11:39.725 00:05:54 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:39.725 00:05:54 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:39.725 00:05:54 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:39.725 00:05:54 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:39.725 00:05:54 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:39.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.241 Waiting for block devices as requested 00:11:40.241 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.241 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.241 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.241 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.510 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:45.510 00:05:59 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:45.511 00:05:59 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:45.511 00:05:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:45.511 00:05:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:45.511 00:05:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:45.511 00:05:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:45.511 00:05:59 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:45.511 00:05:59 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:45.511 00:05:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:45.511 nvme0n1 00:11:45.511 nvme1n1 00:11:45.511 nvme1n2 00:11:45.511 nvme1n3 00:11:45.511 nvme2n1 00:11:45.511 nvme3n1 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:45.511 00:05:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@738 -- # cat 00:11:45.511 00:05:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:45.511 00:05:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:45.511 00:05:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:45.511 00:05:59 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:05:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:45.511 00:05:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:45.511 00:05:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:45.511 00:05:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.511 00:05:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.511 00:05:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.511 00:06:00 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:45.511 00:06:00 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:45.511 00:06:00 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2d2cf691-b951-4e64-9ec1-a904dcc49e92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2d2cf691-b951-4e64-9ec1-a904dcc49e92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a75a4768-c76e-4a09-b3ca-b79fdde77f8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a75a4768-c76e-4a09-b3ca-b79fdde77f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "81fdae00-9274-4724-be75-457320cfe1af"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81fdae00-9274-4724-be75-457320cfe1af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "8321732e-0b98-47a8-9a6a-8f1cd7d0d2eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8321732e-0b98-47a8-9a6a-8f1cd7d0d2eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c18c3049-651a-433c-88a0-1ed37ed645de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c18c3049-651a-433c-88a0-1ed37ed645de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2a6e8439-a710-45b2-8481-e751175ea4ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2a6e8439-a710-45b2-8481-e751175ea4ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:45.512 00:06:00 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:45.512 00:06:00 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:45.512 00:06:00 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:45.512 00:06:00 -- bdev/blockdev.sh@752 -- # killprocess 78894 00:11:45.512 00:06:00 -- common/autotest_common.sh@936 -- # '[' -z 78894 ']' 00:11:45.512 00:06:00 -- common/autotest_common.sh@940 -- # kill -0 78894 00:11:45.512 00:06:00 -- common/autotest_common.sh@941 -- # uname 00:11:45.512 00:06:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.512 00:06:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78894 00:11:45.512 killing process with pid 78894 00:11:45.512 00:06:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.512 00:06:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.512 00:06:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78894' 00:11:45.512 00:06:00 -- common/autotest_common.sh@955 -- # kill 78894 00:11:45.512 00:06:00 -- common/autotest_common.sh@960 -- # wait 78894 00:11:45.770 00:06:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:45.770 00:06:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:45.770 00:06:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:45.770 00:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.770 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:11:45.770 ************************************ 00:11:45.770 START TEST bdev_hello_world 00:11:45.770 ************************************ 00:11:45.770 00:06:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:45.770 [2024-11-28 00:06:00.318541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:45.770 [2024-11-28 00:06:00.318777] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79264 ] 00:11:46.028 [2024-11-28 00:06:00.466509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.028 [2024-11-28 00:06:00.493425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.285 [2024-11-28 00:06:00.648126] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:46.285 [2024-11-28 00:06:00.648277] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:46.285 [2024-11-28 00:06:00.648309] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:46.285 [2024-11-28 00:06:00.649874] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:46.285 [2024-11-28 00:06:00.650225] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:46.285 [2024-11-28 00:06:00.650295] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:46.285 [2024-11-28 00:06:00.650530] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:46.285 00:11:46.285 [2024-11-28 00:06:00.650593] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:46.285 00:11:46.285 real 0m0.502s 00:11:46.285 user 0m0.253s 00:11:46.285 sys 0m0.142s 00:11:46.285 00:06:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.285 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.285 ************************************ 00:11:46.285 END TEST bdev_hello_world 00:11:46.285 ************************************ 00:11:46.285 00:06:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:46.285 00:06:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.285 00:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.285 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.285 ************************************ 00:11:46.285 START TEST bdev_bounds 00:11:46.285 ************************************ 00:11:46.285 00:06:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:46.285 00:06:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=79284 00:11:46.285 00:06:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:46.285 00:06:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 79284' 00:11:46.285 Process bdevio pid: 79284 00:11:46.285 00:06:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:46.285 00:06:00 -- bdev/blockdev.sh@291 -- # waitforlisten 79284 00:11:46.285 00:06:00 -- common/autotest_common.sh@829 -- # '[' -z 79284 ']' 00:11:46.285 00:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.285 00:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.285 00:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:46.285 00:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.285 00:06:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.285 [2024-11-28 00:06:00.877553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:46.285 [2024-11-28 00:06:00.877665] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79284 ] 00:11:46.543 [2024-11-28 00:06:01.023762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.543 [2024-11-28 00:06:01.052610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.543 [2024-11-28 00:06:01.052894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.543 [2024-11-28 00:06:01.053006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.109 00:06:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.109 00:06:01 -- common/autotest_common.sh@862 -- # return 0 00:11:47.109 00:06:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:47.368 I/O targets: 00:11:47.368 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:47.368 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:47.368 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:47.368 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:47.368 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:47.368 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:47.368 00:11:47.368 00:11:47.368 CUnit - A unit testing framework for C - Version 2.1-3 00:11:47.368 http://cunit.sourceforge.net/ 00:11:47.368 00:11:47.368 00:11:47.368 Suite: bdevio tests on: nvme3n1 00:11:47.368 Test: blockdev write read block ...passed 00:11:47.368 Test: blockdev write zeroes read block ...passed 00:11:47.368 Test: blockdev write zeroes read no split ...passed 00:11:47.368 Test: blockdev write zeroes read split ...passed 00:11:47.368 Test: blockdev write zeroes read split partial ...passed 00:11:47.368 Test: blockdev reset ...passed 00:11:47.368 Test: blockdev write read 8 blocks ...passed 00:11:47.368 Test: blockdev write read size > 128k ...passed 00:11:47.368 Test: blockdev write read invalid size ...passed 00:11:47.368 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.368 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.368 Test: blockdev write read max offset ...passed 00:11:47.368 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.368 Test: blockdev writev readv 8 blocks ...passed 00:11:47.368 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.368 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 Suite: bdevio tests on: nvme2n1 00:11:47.369 Test: blockdev write read block ...passed 00:11:47.369 Test: blockdev write zeroes read block ...passed 00:11:47.369 Test: blockdev write zeroes read no split ...passed 00:11:47.369 Test: blockdev 
write zeroes read split ...passed 00:11:47.369 Test: blockdev write zeroes read split partial ...passed 00:11:47.369 Test: blockdev reset ...passed 00:11:47.369 Test: blockdev write read 8 blocks ...passed 00:11:47.369 Test: blockdev write read size > 128k ...passed 00:11:47.369 Test: blockdev write read invalid size ...passed 00:11:47.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.369 Test: blockdev write read max offset ...passed 00:11:47.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.369 Test: blockdev writev readv 8 blocks ...passed 00:11:47.369 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.369 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 Suite: bdevio tests on: nvme1n3 00:11:47.369 Test: blockdev write read block ...passed 00:11:47.369 Test: blockdev write zeroes read block ...passed 00:11:47.369 Test: blockdev write zeroes read no split ...passed 00:11:47.369 Test: blockdev write zeroes read split ...passed 00:11:47.369 Test: blockdev write zeroes read split partial ...passed 00:11:47.369 Test: blockdev reset ...passed 00:11:47.369 Test: blockdev write read 8 blocks ...passed 00:11:47.369 Test: blockdev write read size > 128k ...passed 00:11:47.369 Test: blockdev write read invalid size ...passed 00:11:47.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.369 Test: blockdev write read max offset ...passed 00:11:47.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.369 Test: blockdev writev readv 8 blocks ...passed 00:11:47.369 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.369 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 Suite: bdevio tests on: nvme1n2 00:11:47.369 Test: blockdev write read block ...passed 00:11:47.369 Test: blockdev write zeroes read block ...passed 00:11:47.369 Test: blockdev write zeroes read no split ...passed 00:11:47.369 Test: blockdev write zeroes read split ...passed 00:11:47.369 Test: blockdev write zeroes read split partial ...passed 00:11:47.369 Test: blockdev reset ...passed 00:11:47.369 Test: blockdev write read 8 blocks ...passed 00:11:47.369 Test: blockdev write read size > 128k ...passed 00:11:47.369 Test: blockdev write read invalid size ...passed 00:11:47.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.369 Test: blockdev write read max offset 
...passed 00:11:47.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.369 Test: blockdev writev readv 8 blocks ...passed 00:11:47.369 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.369 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 Suite: bdevio tests on: nvme1n1 00:11:47.369 Test: blockdev write read block ...passed 00:11:47.369 Test: blockdev write zeroes read block ...passed 00:11:47.369 Test: blockdev write zeroes read no split ...passed 00:11:47.369 Test: blockdev write zeroes read split ...passed 00:11:47.369 Test: blockdev write zeroes read split partial ...passed 00:11:47.369 Test: blockdev reset ...passed 00:11:47.369 Test: blockdev write read 8 blocks ...passed 00:11:47.369 Test: blockdev write read size > 128k ...passed 00:11:47.369 Test: blockdev write read invalid size ...passed 00:11:47.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.369 Test: blockdev write read max offset ...passed 00:11:47.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.369 Test: blockdev writev readv 8 blocks ...passed 00:11:47.369 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.369 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 Suite: bdevio tests on: nvme0n1 00:11:47.369 Test: blockdev write read block ...passed 00:11:47.369 Test: blockdev write zeroes read block ...passed 00:11:47.369 Test: blockdev write zeroes read no split ...passed 00:11:47.369 Test: blockdev write zeroes read split ...passed 00:11:47.369 Test: blockdev write zeroes read split partial ...passed 00:11:47.369 Test: blockdev reset ...passed 00:11:47.369 Test: blockdev write read 8 blocks ...passed 00:11:47.369 Test: blockdev write read size > 128k ...passed 00:11:47.369 Test: blockdev write read invalid size ...passed 00:11:47.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:47.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:47.369 Test: blockdev write read max offset ...passed 00:11:47.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:47.369 Test: blockdev writev readv 8 blocks ...passed 00:11:47.369 Test: blockdev writev readv 30 x 1block ...passed 00:11:47.369 Test: blockdev writev readv block ...passed 00:11:47.369 Test: blockdev writev readv size > 128k ...passed 00:11:47.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:47.369 Test: blockdev comparev and writev ...passed 00:11:47.369 Test: blockdev nvme passthru rw ...passed 00:11:47.369 Test: 
blockdev nvme passthru vendor specific ...passed 00:11:47.369 Test: blockdev nvme admin passthru ...passed 00:11:47.369 Test: blockdev copy ...passed 00:11:47.369 00:11:47.369 Run Summary: Type Total Ran Passed Failed Inactive 00:11:47.369 suites 6 6 n/a 0 0 00:11:47.369 tests 138 138 138 0 0 00:11:47.369 asserts 780 780 780 0 n/a 00:11:47.369 00:11:47.369 Elapsed time = 0.408 seconds 00:11:47.369 0 00:11:47.369 00:06:01 -- bdev/blockdev.sh@293 -- # killprocess 79284 00:11:47.369 00:06:01 -- common/autotest_common.sh@936 -- # '[' -z 79284 ']' 00:11:47.369 00:06:01 -- common/autotest_common.sh@940 -- # kill -0 79284 00:11:47.369 00:06:01 -- common/autotest_common.sh@941 -- # uname 00:11:47.369 00:06:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.369 00:06:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79284 00:11:47.369 00:06:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:47.369 00:06:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:47.369 00:06:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79284' 00:11:47.369 killing process with pid 79284 00:11:47.369 00:06:01 -- common/autotest_common.sh@955 -- # kill 79284 00:11:47.369 00:06:01 -- common/autotest_common.sh@960 -- # wait 79284 00:11:47.628 00:06:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:47.628 00:11:47.628 real 0m1.293s 00:11:47.628 user 0m3.177s 00:11:47.628 sys 0m0.248s 00:11:47.628 00:06:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:47.628 00:06:02 -- common/autotest_common.sh@10 -- # set +x 00:11:47.628 ************************************ 00:11:47.628 END TEST bdev_bounds 00:11:47.628 ************************************ 00:11:47.628 00:06:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:47.628 00:06:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:47.628 00:06:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.628 00:06:02 -- common/autotest_common.sh@10 -- # set +x 00:11:47.628 ************************************ 00:11:47.628 START TEST bdev_nbd 00:11:47.628 ************************************ 00:11:47.628 00:06:02 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:47.628 00:06:02 -- bdev/blockdev.sh@298 -- # uname -s 00:11:47.628 00:06:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:47.628 00:06:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.628 00:06:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:47.628 00:06:02 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:47.628 00:06:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:47.628 00:06:02 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:47.628 00:06:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:47.628 00:06:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:47.628 00:06:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:47.628 00:06:02 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:47.628 
00:06:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:47.628 00:06:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:47.628 00:06:02 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:47.628 00:06:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:47.628 00:06:02 -- bdev/blockdev.sh@316 -- # nbd_pid=79334 00:11:47.628 00:06:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:47.628 00:06:02 -- bdev/blockdev.sh@318 -- # waitforlisten 79334 /var/tmp/spdk-nbd.sock 00:11:47.628 00:06:02 -- common/autotest_common.sh@829 -- # '[' -z 79334 ']' 00:11:47.628 00:06:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:47.628 00:06:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.628 00:06:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:47.628 00:06:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:47.628 00:06:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.628 00:06:02 -- common/autotest_common.sh@10 -- # set +x 00:11:47.886 [2024-11-28 00:06:02.229859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:47.886 [2024-11-28 00:06:02.230076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.886 [2024-11-28 00:06:02.376574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.886 [2024-11-28 00:06:02.403357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.820 00:06:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.820 00:06:03 -- common/autotest_common.sh@862 -- # return 0 00:11:48.820 00:06:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:48.820 00:06:03 -- bdev/nbd_common.sh@24 -- # local i 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:48.821 00:06:03 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:48.821 00:06:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:48.821 00:06:03 -- common/autotest_common.sh@867 -- # local i 00:11:48.821 00:06:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.821 00:06:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.821 00:06:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:48.821 00:06:03 -- common/autotest_common.sh@871 -- # break 00:11:48.821 00:06:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.821 00:06:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.821 00:06:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.821 1+0 records in 00:11:48.821 1+0 records out 00:11:48.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674856 s, 6.1 MB/s 00:11:48.821 00:06:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.821 00:06:03 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.821 00:06:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.821 00:06:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.821 00:06:03 -- common/autotest_common.sh@887 -- # return 0 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:48.821 00:06:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:49.078 00:06:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:49.079 00:06:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:49.079 00:06:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:49.079 00:06:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:49.079 00:06:03 -- common/autotest_common.sh@867 -- # local i 00:11:49.079 00:06:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.079 00:06:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.079 00:06:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:49.079 00:06:03 -- common/autotest_common.sh@871 -- # break 00:11:49.079 00:06:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.079 00:06:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.079 00:06:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.079 1+0 records in 00:11:49.079 1+0 records out 00:11:49.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449207 s, 9.1 MB/s 00:11:49.079 00:06:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.079 00:06:03 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.079 00:06:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.079 00:06:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.079 00:06:03 -- common/autotest_common.sh@887 -- # return 0 00:11:49.079 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.079 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:49.079 00:06:03 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:49.337 00:06:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:49.337 00:06:03 -- common/autotest_common.sh@867 -- # local i 00:11:49.337 00:06:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.337 00:06:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.337 00:06:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:49.337 00:06:03 -- common/autotest_common.sh@871 -- # break 00:11:49.337 00:06:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.337 00:06:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.337 00:06:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.337 1+0 records in 00:11:49.337 1+0 records out 00:11:49.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101015 s, 4.1 MB/s 00:11:49.337 00:06:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.337 00:06:03 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.337 00:06:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.337 00:06:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.337 00:06:03 -- common/autotest_common.sh@887 -- # return 0 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:49.337 00:06:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:49.596 00:06:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:49.596 00:06:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:49.596 00:06:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:49.596 00:06:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:49.596 00:06:04 -- common/autotest_common.sh@867 -- # local i 00:11:49.596 00:06:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.596 00:06:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.596 00:06:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:49.596 00:06:04 -- common/autotest_common.sh@871 -- # break 00:11:49.596 00:06:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.596 00:06:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.596 00:06:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.596 1+0 records in 00:11:49.596 1+0 records out 00:11:49.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061608 s, 6.6 MB/s 00:11:49.596 00:06:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.596 00:06:04 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.596 00:06:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.596 00:06:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.596 00:06:04 -- common/autotest_common.sh@887 -- # return 0 00:11:49.596 00:06:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.596 00:06:04 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:49.596 00:06:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:49.859 00:06:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:49.859 00:06:04 -- common/autotest_common.sh@867 -- # local i 00:11:49.859 00:06:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:49.859 00:06:04 -- common/autotest_common.sh@871 -- # break 00:11:49.859 00:06:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.859 1+0 records in 00:11:49.859 1+0 records out 00:11:49.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996959 s, 4.1 MB/s 00:11:49.859 00:06:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.859 00:06:04 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.859 00:06:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.859 00:06:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.859 00:06:04 -- common/autotest_common.sh@887 -- # return 0 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:49.859 00:06:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:49.859 00:06:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:49.859 00:06:04 -- common/autotest_common.sh@867 -- # local i 00:11:49.859 00:06:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:49.859 00:06:04 -- common/autotest_common.sh@871 -- # break 00:11:49.859 00:06:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.859 00:06:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.126 1+0 records in 00:11:50.126 1+0 records out 00:11:50.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932142 s, 4.4 MB/s 00:11:50.126 00:06:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.126 00:06:04 -- common/autotest_common.sh@884 -- # size=4096 00:11:50.126 00:06:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.126 00:06:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:50.126 00:06:04 -- common/autotest_common.sh@887 -- # return 0 
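Each pass of the loop above asks the NBD-side RPC server on /var/tmp/spdk-nbd.sock to export one xnvme bdev as a /dev/nbdX device and then proves it is usable with a single direct 4096-byte read; the lines that follow list the six resulting mappings and detach them again. A condensed sketch of that round trip for one bdev, with the socket path, bdev name, and dd check taken from this run (the output file path is only illustrative):

# Sketch: export one bdev over NBD, read one block back, then detach it.
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
nbd=$($rpc nbd_start_disk nvme0n1)      # prints the assigned device, e.g. /dev/nbd0
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$rpc nbd_get_disks                      # JSON list of bdev <-> nbd mappings
$rpc nbd_stop_disk "$nbd"

The same sequence runs for all six bdevs here (nbd0 through nbd5), which is the mapping dumped as JSON just below.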
00:11:50.126 00:06:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd0", 00:11:50.126 "bdev_name": "nvme0n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd1", 00:11:50.126 "bdev_name": "nvme1n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd2", 00:11:50.126 "bdev_name": "nvme1n2" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd3", 00:11:50.126 "bdev_name": "nvme1n3" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd4", 00:11:50.126 "bdev_name": "nvme2n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd5", 00:11:50.126 "bdev_name": "nvme3n1" 00:11:50.126 } 00:11:50.126 ]' 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:50.126 00:06:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd0", 00:11:50.126 "bdev_name": "nvme0n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd1", 00:11:50.126 "bdev_name": "nvme1n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd2", 00:11:50.126 "bdev_name": "nvme1n2" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd3", 00:11:50.126 "bdev_name": "nvme1n3" 00:11:50.126 }, 00:11:50.126 { 00:11:50.126 "nbd_device": "/dev/nbd4", 00:11:50.126 "bdev_name": "nvme2n1" 00:11:50.126 }, 00:11:50.126 { 00:11:50.127 "nbd_device": "/dev/nbd5", 00:11:50.127 "bdev_name": "nvme3n1" 00:11:50.127 } 00:11:50.127 ]' 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@51 -- # local i 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.127 00:06:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@41 -- # break 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.385 00:06:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@41 -- # break 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.644 00:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@41 -- # break 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@41 -- # break 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.902 00:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@41 -- # break 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.160 00:06:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@41 -- # break 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.419 00:06:05 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.419 00:06:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:51.677 00:06:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@65 -- # true 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@65 -- # count=0 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@122 -- # count=0 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@127 -- # return 0 00:11:51.678 00:06:06 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@12 -- # local i 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.678 00:06:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:11:51.937 /dev/nbd0 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:51.937 00:06:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:51.937 00:06:06 -- common/autotest_common.sh@867 -- # local i 00:11:51.937 00:06:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:51.937 00:06:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:51.937 00:06:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:51.937 00:06:06 -- common/autotest_common.sh@871 -- # break 00:11:51.937 00:06:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:51.937 00:06:06 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:11:51.937 00:06:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.937 1+0 records in 00:11:51.937 1+0 records out 00:11:51.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995074 s, 4.1 MB/s 00:11:51.937 00:06:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.937 00:06:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:51.937 00:06:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.937 00:06:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:51.937 00:06:06 -- common/autotest_common.sh@887 -- # return 0 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:11:51.937 /dev/nbd1 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.937 00:06:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.937 00:06:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:51.937 00:06:06 -- common/autotest_common.sh@867 -- # local i 00:11:51.937 00:06:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:51.937 00:06:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:52.196 00:06:06 -- common/autotest_common.sh@871 -- # break 00:11:52.196 00:06:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.196 1+0 records in 00:11:52.196 1+0 records out 00:11:52.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447372 s, 9.2 MB/s 00:11:52.196 00:06:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.196 00:06:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.196 00:06:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.196 00:06:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.196 00:06:06 -- common/autotest_common.sh@887 -- # return 0 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:11:52.196 /dev/nbd10 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:52.196 00:06:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:52.196 00:06:06 -- common/autotest_common.sh@867 -- # local i 00:11:52.196 00:06:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:52.196 00:06:06 -- common/autotest_common.sh@871 -- # break 00:11:52.196 00:06:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.196 00:06:06 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.196 00:06:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.196 1+0 records in 00:11:52.196 1+0 records out 00:11:52.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804161 s, 5.1 MB/s 00:11:52.196 00:06:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.196 00:06:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.196 00:06:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.196 00:06:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.196 00:06:06 -- common/autotest_common.sh@887 -- # return 0 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.196 00:06:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:11:52.456 /dev/nbd11 00:11:52.456 00:06:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:52.456 00:06:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:52.456 00:06:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:52.456 00:06:06 -- common/autotest_common.sh@867 -- # local i 00:11:52.456 00:06:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.456 00:06:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.456 00:06:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:52.456 00:06:06 -- common/autotest_common.sh@871 -- # break 00:11:52.456 00:06:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.456 00:06:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.456 00:06:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.456 1+0 records in 00:11:52.456 1+0 records out 00:11:52.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998224 s, 4.1 MB/s 00:11:52.456 00:06:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.456 00:06:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.456 00:06:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.456 00:06:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.456 00:06:06 -- common/autotest_common.sh@887 -- # return 0 00:11:52.456 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.456 00:06:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.456 00:06:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:11:52.715 /dev/nbd12 00:11:52.715 00:06:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:52.715 00:06:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:52.715 00:06:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:52.715 00:06:07 -- common/autotest_common.sh@867 -- # local i 00:11:52.715 00:06:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.715 00:06:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.715 00:06:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:52.715 00:06:07 -- common/autotest_common.sh@871 -- # break 00:11:52.715 00:06:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
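From this point the helpers attach each bdev to an explicitly chosen NBD node (nvme1n2 on /dev/nbd10, nvme1n3 on /dev/nbd11, and so on) instead of letting the RPC pick one. A condensed sketch of that flow, using only the nbd_start_disk, nbd_get_disks and nbd_stop_disk RPCs visible in the trace, follows; the socket path and the bdev-to-node pairing are copied from this run and would differ elsewhere.

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
bdevs=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"   # pin each bdev to a chosen NBD node
done
rpc nbd_get_disks                                     # JSON listing of the active bdev/NBD pairs
for nbd in "${nbds[@]}"; do
    rpc nbd_stop_disk "$nbd"                          # detach in the same order the test does
done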
00:11:52.715 00:06:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.715 00:06:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.715 1+0 records in 00:11:52.715 1+0 records out 00:11:52.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904831 s, 4.5 MB/s 00:11:52.715 00:06:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.715 00:06:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.715 00:06:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.715 00:06:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.715 00:06:07 -- common/autotest_common.sh@887 -- # return 0 00:11:52.715 00:06:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.715 00:06:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.715 00:06:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:11:52.973 /dev/nbd13 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:52.973 00:06:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:52.973 00:06:07 -- common/autotest_common.sh@867 -- # local i 00:11:52.973 00:06:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:52.973 00:06:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:52.973 00:06:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:52.973 00:06:07 -- common/autotest_common.sh@871 -- # break 00:11:52.973 00:06:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:52.973 00:06:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:52.973 00:06:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.973 1+0 records in 00:11:52.973 1+0 records out 00:11:52.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830012 s, 4.9 MB/s 00:11:52.973 00:06:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.973 00:06:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:52.973 00:06:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.973 00:06:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:52.973 00:06:07 -- common/autotest_common.sh@887 -- # return 0 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.973 00:06:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd0", 00:11:53.232 "bdev_name": "nvme0n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd1", 00:11:53.232 "bdev_name": "nvme1n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd10", 00:11:53.232 "bdev_name": "nvme1n2" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd11", 00:11:53.232 "bdev_name": "nvme1n3" 00:11:53.232 }, 00:11:53.232 { 
00:11:53.232 "nbd_device": "/dev/nbd12", 00:11:53.232 "bdev_name": "nvme2n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd13", 00:11:53.232 "bdev_name": "nvme3n1" 00:11:53.232 } 00:11:53.232 ]' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd0", 00:11:53.232 "bdev_name": "nvme0n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd1", 00:11:53.232 "bdev_name": "nvme1n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd10", 00:11:53.232 "bdev_name": "nvme1n2" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd11", 00:11:53.232 "bdev_name": "nvme1n3" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd12", 00:11:53.232 "bdev_name": "nvme2n1" 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "nbd_device": "/dev/nbd13", 00:11:53.232 "bdev_name": "nvme3n1" 00:11:53.232 } 00:11:53.232 ]' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:53.232 /dev/nbd1 00:11:53.232 /dev/nbd10 00:11:53.232 /dev/nbd11 00:11:53.232 /dev/nbd12 00:11:53.232 /dev/nbd13' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:53.232 /dev/nbd1 00:11:53.232 /dev/nbd10 00:11:53.232 /dev/nbd11 00:11:53.232 /dev/nbd12 00:11:53.232 /dev/nbd13' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@65 -- # count=6 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@66 -- # echo 6 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@95 -- # count=6 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:53.232 256+0 records in 00:11:53.232 256+0 records out 00:11:53.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712915 s, 147 MB/s 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:53.232 256+0 records in 00:11:53.232 256+0 records out 00:11:53.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148386 s, 7.1 MB/s 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.232 00:06:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:53.491 256+0 records in 00:11:53.491 256+0 records out 00:11:53.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188034 s, 5.6 MB/s 00:11:53.491 00:06:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.491 00:06:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct 00:11:53.750 256+0 records in 00:11:53.750 256+0 records out 00:11:53.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137936 s, 7.6 MB/s 00:11:53.750 00:06:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.750 00:06:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:53.750 256+0 records in 00:11:53.750 256+0 records out 00:11:53.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129026 s, 8.1 MB/s 00:11:53.750 00:06:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.750 00:06:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:54.008 256+0 records in 00:11:54.008 256+0 records out 00:11:54.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196872 s, 5.3 MB/s 00:11:54.008 00:06:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:54.008 00:06:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:54.267 256+0 records in 00:11:54.267 256+0 records out 00:11:54.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149703 s, 7.0 MB/s 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.267 
00:06:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@51 -- # local i 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:54.267 00:06:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@41 -- # break 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.268 00:06:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@41 -- # break 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.526 00:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@41 -- # break 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.785 00:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@41 -- # break 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@41 -- # break 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.044 00:06:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@41 -- # break 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.303 00:06:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@65 -- # true 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@65 -- # count=0 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@104 -- # count=0 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@109 -- # return 0 00:11:55.562 00:06:10 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:55.562 00:06:10 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:55.820 malloc_lvol_verify 00:11:55.820 00:06:10 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:55.820 05abeea5-f0a6-4b7f-aa43-2ae9a511c932 00:11:55.820 00:06:10 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:11:56.079 48778521-22ef-48a3-b0c1-cef3a4bb07f9 00:11:56.079 00:06:10 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:56.338 /dev/nbd0 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:56.338 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.338 Discarding device blocks: 0/4096 done 00:11:56.338 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:56.338 00:11:56.338 Allocating group tables: 0/1 done 00:11:56.338 Writing inode tables: 0/1 done 00:11:56.338 Creating journal (1024 blocks): done 00:11:56.338 Writing superblocks and filesystem accounting information: 0/1 done 00:11:56.338 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@51 -- # local i 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.338 00:06:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@41 -- # break 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:56.597 00:06:10 -- bdev/nbd_common.sh@147 -- # return 0 00:11:56.597 00:06:10 -- bdev/blockdev.sh@324 -- # killprocess 79334 00:11:56.597 00:06:10 -- common/autotest_common.sh@936 -- # '[' -z 79334 ']' 00:11:56.597 00:06:10 -- common/autotest_common.sh@940 -- # kill -0 79334 00:11:56.597 00:06:10 -- common/autotest_common.sh@941 -- # uname 00:11:56.597 00:06:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.597 00:06:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79334 00:11:56.597 00:06:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:56.597 00:06:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:56.597 killing process with pid 79334 00:11:56.597 00:06:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79334' 00:11:56.597 00:06:10 -- common/autotest_common.sh@955 -- # kill 79334 00:11:56.597 00:06:10 -- common/autotest_common.sh@960 -- # wait 79334 00:11:56.597 00:06:11 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:56.597 00:11:56.597 real 0m8.969s 00:11:56.597 user 0m12.565s 00:11:56.597 sys 0m3.151s 00:11:56.597 ************************************ 00:11:56.597 END TEST bdev_nbd 00:11:56.597 ************************************ 00:11:56.597 00:06:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.597 00:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:56.597 00:06:11 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:56.597 00:06:11 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:11:56.597 00:06:11 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:11:56.597 00:06:11 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:56.597 00:06:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:56.597 00:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.597 00:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 ************************************ 00:11:56.862 START TEST bdev_fio 00:11:56.862 ************************************ 00:11:56.862 00:06:11 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@329 -- # local env_context 00:11:56.862 00:06:11 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:56.862 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:56.862 00:06:11 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:56.862 00:06:11 -- bdev/blockdev.sh@337 -- # echo '' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:56.862 00:06:11 -- bdev/blockdev.sh@337 -- # env_context= 00:11:56.862 00:06:11 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:56.862 00:06:11 -- common/autotest_common.sh@1270 -- # local workload=verify 00:11:56.862 00:06:11 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:11:56.862 00:06:11 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:56.862 00:06:11 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:56.862 00:06:11 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:56.862 00:06:11 -- common/autotest_common.sh@1290 -- # cat 00:11:56.862 00:06:11 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1303 -- # cat 00:11:56.862 00:06:11 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:56.862 00:06:11 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:56.862 00:06:11 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
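The bdev_fio stage above builds its job file incrementally: fio_config_gen appears to write the global verify/AIO portion of bdev.fio, then one [job_<bdev>] stanza with a matching filename= line is appended per bdev before fio is launched with the spdk_bdev ioengine. The sketch below shows only that per-bdev appending step; the config path and bdev list are taken from this run, and the global section produced by fio_config_gen is assumed to already be in place.

FIO_CFG=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
    {
        echo "[job_$b]"      # one fio job section per bdev, mirroring the echo '[job_*]' lines above
        echo "filename=$b"   # the spdk_bdev ioengine resolves this name to the bdev itself
    } >> "$FIO_CFG"
done
# The suite then runs it much as the trace shows:
#   fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$FIO_CFG" \
#       --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json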
00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:11:56.862 00:06:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:56.862 00:06:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:11:56.862 00:06:11 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:56.862 00:06:11 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:56.862 00:06:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.862 00:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 ************************************ 00:11:56.862 START TEST bdev_fio_rw_verify 00:11:56.862 ************************************ 00:11:56.862 00:06:11 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:56.862 00:06:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:56.862 00:06:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:56.862 00:06:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:56.862 00:06:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:56.862 00:06:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:56.862 00:06:11 -- common/autotest_common.sh@1330 -- # shift 00:11:56.862 00:06:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:56.862 00:06:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:56.862 00:06:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:56.862 00:06:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:56.862 00:06:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:56.862 00:06:11 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:56.862 00:06:11 -- common/autotest_common.sh@1336 -- # break 00:11:56.862 00:06:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:56.862 00:06:11 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:56.862 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.862 fio-3.35 00:11:56.862 Starting 6 threads 00:12:09.059 00:12:09.059 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=79714: Thu Nov 28 00:06:21 2024 00:12:09.059 read: IOPS=24.5k, BW=95.6MiB/s (100MB/s)(956MiB/10001msec) 00:12:09.059 slat (usec): min=2, max=1828, avg= 5.08, stdev=11.93 00:12:09.059 clat (usec): min=65, max=6339, avg=768.00, stdev=639.89 00:12:09.059 lat (usec): min=69, max=6349, avg=773.07, stdev=640.35 00:12:09.059 clat percentiles (usec): 00:12:09.059 | 50.000th=[ 537], 99.000th=[ 2999], 99.900th=[ 4228], 99.990th=[ 5342], 00:12:09.059 | 99.999th=[ 5669] 00:12:09.059 write: IOPS=24.7k, BW=96.6MiB/s (101MB/s)(967MiB/10001msec); 0 zone resets 00:12:09.059 slat (usec): min=9, max=5243, avg=31.95, stdev=111.64 00:12:09.059 clat (usec): min=62, max=7276, avg=909.43, stdev=711.56 00:12:09.059 lat (usec): min=81, max=7352, avg=941.38, stdev=726.79 00:12:09.059 clat percentiles (usec): 00:12:09.059 | 50.000th=[ 652], 99.000th=[ 3359], 99.900th=[ 4752], 99.990th=[ 5997], 00:12:09.059 | 99.999th=[ 6652] 00:12:09.059 bw ( KiB/s): min=59536, max=164911, per=100.00%, avg=100496.42, stdev=5372.78, samples=114 00:12:09.059 iops : min=14884, max=41227, avg=25123.58, stdev=1343.13, samples=114 00:12:09.059 lat (usec) : 100=0.03%, 250=11.57%, 500=29.13%, 750=19.25%, 1000=10.88% 00:12:09.059 lat (msec) : 2=22.01%, 4=6.88%, 10=0.25% 00:12:09.059 cpu : usr=43.50%, sys=32.92%, ctx=7796, majf=0, minf=23373 00:12:09.059 IO depths : 1=11.8%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.059 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.059 issued rwts: total=244687,247432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:09.059 00:12:09.059 Run status group 0 (all jobs): 00:12:09.059 READ: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=956MiB (1002MB), run=10001-10001msec 00:12:09.059 WRITE: bw=96.6MiB/s (101MB/s), 96.6MiB/s-96.6MiB/s (101MB/s-101MB/s), io=967MiB (1013MB), run=10001-10001msec 00:12:09.059 ----------------------------------------------------- 00:12:09.059 Suppressions used: 00:12:09.059 count bytes template 00:12:09.059 6 48 /usr/src/fio/parse.c 00:12:09.059 2583 247968 /usr/src/fio/iolog.c 00:12:09.059 1 8 libtcmalloc_minimal.so 00:12:09.059 1 904 libcrypto.so 00:12:09.059 
----------------------------------------------------- 00:12:09.059 00:12:09.059 00:12:09.059 real 0m10.991s 00:12:09.059 user 0m26.788s 00:12:09.059 sys 0m20.054s 00:12:09.059 00:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.059 ************************************ 00:12:09.059 END TEST bdev_fio_rw_verify 00:12:09.059 ************************************ 00:12:09.059 00:06:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.059 00:06:22 -- bdev/blockdev.sh@348 -- # rm -f 00:12:09.059 00:06:22 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.059 00:06:22 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.059 00:06:22 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:09.059 00:06:22 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:09.059 00:06:22 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:09.059 00:06:22 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:09.059 00:06:22 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.059 00:06:22 -- common/autotest_common.sh@1290 -- # cat 00:12:09.059 00:06:22 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:09.059 00:06:22 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:09.060 00:06:22 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2d2cf691-b951-4e64-9ec1-a904dcc49e92"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2d2cf691-b951-4e64-9ec1-a904dcc49e92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a75a4768-c76e-4a09-b3ca-b79fdde77f8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a75a4768-c76e-4a09-b3ca-b79fdde77f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "81fdae00-9274-4724-be75-457320cfe1af"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"81fdae00-9274-4724-be75-457320cfe1af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "8321732e-0b98-47a8-9a6a-8f1cd7d0d2eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8321732e-0b98-47a8-9a6a-8f1cd7d0d2eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c18c3049-651a-433c-88a0-1ed37ed645de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c18c3049-651a-433c-88a0-1ed37ed645de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2a6e8439-a710-45b2-8481-e751175ea4ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2a6e8439-a710-45b2-8481-e751175ea4ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:09.060 00:06:22 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:09.060 00:06:22 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:09.060 00:06:22 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.060 /home/vagrant/spdk_repo/spdk 00:12:09.060 00:06:22 -- bdev/blockdev.sh@360 -- # popd 00:12:09.060 00:06:22 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:09.060 00:06:22 -- bdev/blockdev.sh@362 -- # return 0 00:12:09.060 00:12:09.060 real 0m11.146s 00:12:09.060 user 0m26.863s 00:12:09.060 sys 0m20.119s 00:12:09.060 ************************************ 00:12:09.060 END TEST bdev_fio 00:12:09.060 ************************************ 00:12:09.060 00:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.060 00:06:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 00:06:22 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM 
EXIT 00:12:09.060 00:06:22 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.060 00:06:22 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:09.060 00:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.060 00:06:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.060 ************************************ 00:12:09.060 START TEST bdev_verify 00:12:09.060 ************************************ 00:12:09.060 00:06:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.060 [2024-11-28 00:06:22.467976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:09.060 [2024-11-28 00:06:22.468085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79885 ] 00:12:09.060 [2024-11-28 00:06:22.609787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.060 [2024-11-28 00:06:22.640921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.060 [2024-11-28 00:06:22.641049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.060 Running I/O for 5 seconds... 00:12:14.324 00:12:14.324 Latency(us) 00:12:14.324 [2024-11-28T00:06:28.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0x20000 00:12:14.324 nvme0n1 : 5.06 2789.09 10.89 0.00 0.00 45704.25 15022.87 62511.26 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x20000 length 0x20000 00:12:14.324 nvme0n1 : 5.07 2567.57 10.03 0.00 0.00 49664.29 12098.95 70173.93 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0x80000 00:12:14.324 nvme1n1 : 5.06 2356.46 9.20 0.00 0.00 54126.64 3755.72 67350.84 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x80000 length 0x80000 00:12:14.324 nvme1n1 : 5.07 2477.89 9.68 0.00 0.00 51407.21 7360.20 64124.46 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0x80000 00:12:14.324 nvme1n2 : 5.07 2387.98 9.33 0.00 0.00 53273.29 13510.50 66947.54 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x80000 length 0x80000 00:12:14.324 nvme1n2 : 5.08 2456.08 9.59 0.00 0.00 51736.36 15627.82 77433.30 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0x80000 00:12:14.324 nvme1n3 : 5.07 2396.23 9.36 0.00 0.00 53022.87 6351.95 68157.44 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: 
nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x80000 length 0x80000 00:12:14.324 nvme1n3 : 5.08 2462.70 9.62 0.00 0.00 51548.92 13308.85 66947.54 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0xbd0bd 00:12:14.324 nvme2n1 : 5.07 2613.27 10.21 0.00 0.00 48542.46 7813.91 67754.14 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:14.324 nvme2n1 : 5.08 2603.71 10.17 0.00 0.00 48745.83 11695.66 77030.01 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0x0 length 0xa0000 00:12:14.324 nvme3n1 : 5.08 2558.39 9.99 0.00 0.00 49504.08 9527.93 64124.46 00:12:14.324 [2024-11-28T00:06:28.926Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:14.324 Verification LBA range: start 0xa0000 length 0xa0000 00:12:14.324 nvme3n1 : 5.07 2603.79 10.17 0.00 0.00 48680.73 2495.41 74610.22 00:12:14.324 [2024-11-28T00:06:28.926Z] =================================================================================================================== 00:12:14.324 [2024-11-28T00:06:28.927Z] Total : 30273.17 118.25 0.00 0.00 50386.84 2495.41 77433.30 00:12:14.325 00:12:14.325 real 0m5.666s 00:12:14.325 user 0m7.252s 00:12:14.325 sys 0m2.918s 00:12:14.325 00:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:14.325 ************************************ 00:12:14.325 END TEST bdev_verify 00:12:14.325 ************************************ 00:12:14.325 00:06:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 00:06:28 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:14.325 00:06:28 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:14.325 00:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.325 00:06:28 -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 ************************************ 00:12:14.325 START TEST bdev_verify_big_io 00:12:14.325 ************************************ 00:12:14.325 00:06:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:14.325 [2024-11-28 00:06:28.174203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:14.325 [2024-11-28 00:06:28.174323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79978 ] 00:12:14.325 [2024-11-28 00:06:28.322133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:14.325 [2024-11-28 00:06:28.351954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.325 [2024-11-28 00:06:28.352024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.325 Running I/O for 5 seconds... 
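(Note: the bdev_verify pass above and the bdev_verify_big_io pass below both drive the same bdevperf example binary against the bdev.json generated earlier in this job; a minimal standalone invocation is sketched here, assuming the repo layout shown in this log.)
# hedged sketch -- same flags as the verify run above; switch -o 4096 to -o 65536 for the big-I/O variant
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/examples/bdevperf --json "$SPDK"/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3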
00:12:19.594 00:12:19.594 Latency(us) 00:12:19.594 [2024-11-28T00:06:34.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0x2000 00:12:19.594 nvme0n1 : 5.49 285.17 17.82 0.00 0.00 430487.39 61301.37 512995.64 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x2000 length 0x2000 00:12:19.594 nvme0n1 : 5.49 253.56 15.85 0.00 0.00 486068.08 59284.87 525901.19 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0x8000 00:12:19.594 nvme1n1 : 5.50 235.93 14.75 0.00 0.00 505741.87 177451.32 493637.32 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x8000 length 0x8000 00:12:19.594 nvme1n1 : 5.50 236.02 14.75 0.00 0.00 526809.47 52428.80 561391.46 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0x8000 00:12:19.594 nvme1n2 : 5.53 205.73 12.86 0.00 0.00 589223.49 26214.40 629145.60 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x8000 length 0x8000 00:12:19.594 nvme1n2 : 5.49 302.07 18.88 0.00 0.00 404058.02 48395.82 471052.60 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0x8000 00:12:19.594 nvme1n3 : 5.53 235.34 14.71 0.00 0.00 506978.33 26214.40 548485.91 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x8000 length 0x8000 00:12:19.594 nvme1n3 : 5.50 236.63 14.79 0.00 0.00 509632.70 52428.80 512995.64 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0xbd0b 00:12:19.594 nvme2n1 : 5.53 267.29 16.71 0.00 0.00 439888.62 21778.12 516222.03 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:19.594 nvme2n1 : 5.51 342.94 21.43 0.00 0.00 348350.02 10889.06 635598.38 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0x0 length 0xa000 00:12:19.594 nvme3n1 : 5.53 267.07 16.69 0.00 0.00 434303.00 2230.74 667862.25 00:12:19.594 [2024-11-28T00:06:34.196Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.594 Verification LBA range: start 0xa000 length 0xa000 00:12:19.594 nvme3n1 : 5.51 284.47 17.78 0.00 0.00 409210.15 7309.78 474278.99 00:12:19.594 [2024-11-28T00:06:34.196Z] =================================================================================================================== 00:12:19.594 [2024-11-28T00:06:34.196Z] Total : 3152.23 197.01 0.00 0.00 457566.08 2230.74 667862.25 00:12:19.853 00:12:19.853 real 0m6.160s 00:12:19.853 user 
0m11.262s 00:12:19.853 sys 0m0.445s 00:12:19.853 00:06:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:19.853 ************************************ 00:12:19.853 END TEST bdev_verify_big_io 00:12:19.853 00:06:34 -- common/autotest_common.sh@10 -- # set +x 00:12:19.853 ************************************ 00:12:19.853 00:06:34 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.853 00:06:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:19.853 00:06:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.853 00:06:34 -- common/autotest_common.sh@10 -- # set +x 00:12:19.853 ************************************ 00:12:19.853 START TEST bdev_write_zeroes 00:12:19.853 ************************************ 00:12:19.853 00:06:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.853 [2024-11-28 00:06:34.398439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:19.853 [2024-11-28 00:06:34.398538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80070 ] 00:12:20.112 [2024-11-28 00:06:34.541206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.112 [2024-11-28 00:06:34.571446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.370 Running I/O for 1 seconds... 00:12:21.378 00:12:21.378 Latency(us) 00:12:21.378 [2024-11-28T00:06:35.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme0n1 : 1.00 14668.95 57.30 0.00 0.00 8717.70 5747.00 21374.82 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme1n1 : 1.00 14651.39 57.23 0.00 0.00 8719.40 5620.97 19459.15 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme1n2 : 1.01 14684.08 57.36 0.00 0.00 8694.93 4839.58 17946.78 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme1n3 : 1.01 14600.99 57.04 0.00 0.00 8736.15 5721.80 19862.45 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme2n1 : 1.01 15884.73 62.05 0.00 0.00 8015.47 4184.22 17845.96 00:12:21.378 [2024-11-28T00:06:35.980Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.378 nvme3n1 : 1.01 14636.59 57.17 0.00 0.00 8658.70 4184.22 24601.21 00:12:21.378 [2024-11-28T00:06:35.980Z] =================================================================================================================== 00:12:21.378 [2024-11-28T00:06:35.980Z] Total : 89126.73 348.15 0.00 0.00 8581.84 4184.22 24601.21 00:12:21.378 00:12:21.378 real 0m1.583s 00:12:21.378 user 0m0.997s 00:12:21.378 sys 0m0.413s 00:12:21.378 00:06:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.378 
************************************ 00:12:21.378 END TEST bdev_write_zeroes 00:12:21.378 ************************************ 00:12:21.378 00:06:35 -- common/autotest_common.sh@10 -- # set +x 00:12:21.378 00:06:35 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.378 00:06:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:21.378 00:06:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.378 00:06:35 -- common/autotest_common.sh@10 -- # set +x 00:12:21.636 ************************************ 00:12:21.636 START TEST bdev_json_nonenclosed 00:12:21.636 ************************************ 00:12:21.636 00:06:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.636 [2024-11-28 00:06:36.040204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:21.636 [2024-11-28 00:06:36.040315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80108 ] 00:12:21.636 [2024-11-28 00:06:36.187648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.636 [2024-11-28 00:06:36.218015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.636 [2024-11-28 00:06:36.218163] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:21.636 [2024-11-28 00:06:36.218181] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:21.895 00:12:21.895 real 0m0.309s 00:12:21.895 user 0m0.113s 00:12:21.895 sys 0m0.093s 00:12:21.895 00:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.895 ************************************ 00:12:21.895 END TEST bdev_json_nonenclosed 00:12:21.895 ************************************ 00:12:21.895 00:06:36 -- common/autotest_common.sh@10 -- # set +x 00:12:21.895 00:06:36 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.895 00:06:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:21.895 00:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.895 00:06:36 -- common/autotest_common.sh@10 -- # set +x 00:12:21.895 ************************************ 00:12:21.895 START TEST bdev_json_nonarray 00:12:21.895 ************************************ 00:12:21.895 00:06:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.895 [2024-11-28 00:06:36.409284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
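(Note: the bdev_json_nonenclosed test above and bdev_json_nonarray test below feed bdevperf deliberately malformed configs; for contrast, a well-formed --json config is a single object whose "subsystems" key is an array, as the save_config dumps later in this log show. A minimal hand-written example follows; the file name and the malloc bdev parameters are illustrative, mirroring values that appear further down.)
# hedged sketch -- minimal valid --json config shape
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } }
      ]
    }
  ]
}
EOF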
00:12:21.895 [2024-11-28 00:06:36.409409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80129 ] 00:12:22.154 [2024-11-28 00:06:36.557130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.154 [2024-11-28 00:06:36.587913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.154 [2024-11-28 00:06:36.588067] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:22.154 [2024-11-28 00:06:36.588086] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.154 00:12:22.154 real 0m0.311s 00:12:22.154 user 0m0.116s 00:12:22.154 sys 0m0.091s 00:12:22.154 00:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.154 00:06:36 -- common/autotest_common.sh@10 -- # set +x 00:12:22.154 ************************************ 00:12:22.154 END TEST bdev_json_nonarray 00:12:22.154 ************************************ 00:12:22.154 00:06:36 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:22.154 00:06:36 -- bdev/blockdev.sh@809 -- # cleanup 00:12:22.154 00:06:36 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:22.154 00:06:36 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:22.154 00:06:36 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:22.154 00:06:36 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:23.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.295 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.295 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.295 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.295 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.295 00:12:35.295 real 0m56.413s 00:12:35.295 user 1m10.692s 00:12:35.295 sys 0m43.756s 00:12:35.295 00:06:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.295 00:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.295 ************************************ 00:12:35.295 END TEST blockdev_xnvme 00:12:35.295 ************************************ 00:12:35.295 00:06:49 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:35.295 00:06:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:35.295 00:06:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.295 00:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.295 ************************************ 00:12:35.295 START TEST ublk 00:12:35.295 ************************************ 00:12:35.295 00:06:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:35.295 * Looking for test storage... 
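(Note: the ublk suite launched above depends on the kernel ublk driver; a hedged sketch of running it standalone, based on the modprobe step that appears further down in this log.)
# load the ublk kernel module, then run the suite directly
modprobe ublk_drv
/home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh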
00:12:35.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:35.295 00:06:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:35.295 00:06:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:35.295 00:06:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:35.295 00:06:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:35.295 00:06:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:35.295 00:06:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:35.295 00:06:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:35.295 00:06:49 -- scripts/common.sh@335 -- # IFS=.-: 00:12:35.295 00:06:49 -- scripts/common.sh@335 -- # read -ra ver1 00:12:35.295 00:06:49 -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.295 00:06:49 -- scripts/common.sh@336 -- # read -ra ver2 00:12:35.295 00:06:49 -- scripts/common.sh@337 -- # local 'op=<' 00:12:35.295 00:06:49 -- scripts/common.sh@339 -- # ver1_l=2 00:12:35.295 00:06:49 -- scripts/common.sh@340 -- # ver2_l=1 00:12:35.295 00:06:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:35.295 00:06:49 -- scripts/common.sh@343 -- # case "$op" in 00:12:35.295 00:06:49 -- scripts/common.sh@344 -- # : 1 00:12:35.295 00:06:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:35.295 00:06:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.295 00:06:49 -- scripts/common.sh@364 -- # decimal 1 00:12:35.295 00:06:49 -- scripts/common.sh@352 -- # local d=1 00:12:35.295 00:06:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.295 00:06:49 -- scripts/common.sh@354 -- # echo 1 00:12:35.295 00:06:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:35.295 00:06:49 -- scripts/common.sh@365 -- # decimal 2 00:12:35.295 00:06:49 -- scripts/common.sh@352 -- # local d=2 00:12:35.295 00:06:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.295 00:06:49 -- scripts/common.sh@354 -- # echo 2 00:12:35.295 00:06:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:35.295 00:06:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:35.295 00:06:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:35.295 00:06:49 -- scripts/common.sh@367 -- # return 0 00:12:35.295 00:06:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.295 00:06:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:35.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.295 --rc genhtml_branch_coverage=1 00:12:35.295 --rc genhtml_function_coverage=1 00:12:35.295 --rc genhtml_legend=1 00:12:35.295 --rc geninfo_all_blocks=1 00:12:35.295 --rc geninfo_unexecuted_blocks=1 00:12:35.295 00:12:35.295 ' 00:12:35.295 00:06:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:35.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.295 --rc genhtml_branch_coverage=1 00:12:35.295 --rc genhtml_function_coverage=1 00:12:35.295 --rc genhtml_legend=1 00:12:35.295 --rc geninfo_all_blocks=1 00:12:35.295 --rc geninfo_unexecuted_blocks=1 00:12:35.295 00:12:35.295 ' 00:12:35.295 00:06:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:35.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.295 --rc genhtml_branch_coverage=1 00:12:35.295 --rc genhtml_function_coverage=1 00:12:35.295 --rc genhtml_legend=1 00:12:35.295 --rc geninfo_all_blocks=1 00:12:35.295 --rc geninfo_unexecuted_blocks=1 00:12:35.295 00:12:35.295 ' 00:12:35.295 00:06:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:35.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.295 --rc genhtml_branch_coverage=1 00:12:35.295 --rc genhtml_function_coverage=1 00:12:35.295 --rc genhtml_legend=1 00:12:35.295 --rc geninfo_all_blocks=1 00:12:35.295 --rc geninfo_unexecuted_blocks=1 00:12:35.295 00:12:35.295 ' 00:12:35.295 00:06:49 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:35.295 00:06:49 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:35.295 00:06:49 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:35.295 00:06:49 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:35.295 00:06:49 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:35.295 00:06:49 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:35.295 00:06:49 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:35.295 00:06:49 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:35.295 00:06:49 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:35.295 00:06:49 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:35.295 00:06:49 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:35.295 00:06:49 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:35.295 00:06:49 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:35.295 00:06:49 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:35.296 00:06:49 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:35.296 00:06:49 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:35.296 00:06:49 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:35.296 00:06:49 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:35.296 00:06:49 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:35.296 00:06:49 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:35.296 00:06:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:35.296 00:06:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.296 00:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.296 ************************************ 00:12:35.296 START TEST test_save_ublk_config 00:12:35.296 ************************************ 00:12:35.296 00:06:49 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:35.296 00:06:49 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:35.296 00:06:49 -- ublk/ublk.sh@103 -- # tgtpid=80507 00:12:35.296 00:06:49 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:35.296 00:06:49 -- ublk/ublk.sh@106 -- # waitforlisten 80507 00:12:35.296 00:06:49 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:35.296 00:06:49 -- common/autotest_common.sh@829 -- # '[' -z 80507 ']' 00:12:35.296 00:06:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.296 00:06:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.296 00:06:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.296 00:06:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.296 00:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:35.296 [2024-11-28 00:06:49.817910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
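(Note: test_save_ublk_config, starting above, builds a ublk device and snapshots the target configuration; a condensed sketch of that save step follows, using the same commands visible in this log. rpc_cmd and waitforlisten are the test helpers from autotest_common.sh that talk to /var/tmp/spdk.sock.)
# hedged sketch of the save step in test_save_ublk_config
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &   # tgtpid=$!
# waitforlisten "$tgtpid"                                    # wait for /var/tmp/spdk.sock
rpc_cmd ublk_create_target            # saved below as params: { "cpumask": "1" }
rpc_cmd bdev_malloc_create 128 4096   # 128 MiB malloc bdev, 4 KiB blocks ("malloc0" in this run)
rpc_cmd ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0; 1 queue, depth 128 per the debug lines above
config=$(rpc_cmd save_config)         # the JSON dump printed below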
00:12:35.296 [2024-11-28 00:06:49.818021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80507 ] 00:12:35.555 [2024-11-28 00:06:49.957325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.555 [2024-11-28 00:06:49.996920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.555 [2024-11-28 00:06:49.997162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.121 00:06:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.121 00:06:50 -- common/autotest_common.sh@862 -- # return 0 00:12:36.121 00:06:50 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:36.121 00:06:50 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:36.122 00:06:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.122 00:06:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.122 [2024-11-28 00:06:50.628593] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:36.122 malloc0 00:12:36.122 [2024-11-28 00:06:50.652479] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:36.122 [2024-11-28 00:06:50.652565] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:36.122 [2024-11-28 00:06:50.652573] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:36.122 [2024-11-28 00:06:50.652581] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:36.122 [2024-11-28 00:06:50.661449] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:36.122 [2024-11-28 00:06:50.661477] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:36.122 [2024-11-28 00:06:50.668389] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:36.122 [2024-11-28 00:06:50.668484] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:36.122 [2024-11-28 00:06:50.685384] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:36.122 0 00:12:36.122 00:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.122 00:06:50 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:36.122 00:06:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.122 00:06:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.380 00:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.380 00:06:50 -- ublk/ublk.sh@115 -- # config='{ 00:12:36.380 "subsystems": [ 00:12:36.380 { 00:12:36.380 "subsystem": "iobuf", 00:12:36.380 "config": [ 00:12:36.380 { 00:12:36.380 "method": "iobuf_set_options", 00:12:36.380 "params": { 00:12:36.380 "small_pool_count": 8192, 00:12:36.380 "large_pool_count": 1024, 00:12:36.380 "small_bufsize": 8192, 00:12:36.380 "large_bufsize": 135168 00:12:36.380 } 00:12:36.380 } 00:12:36.380 ] 00:12:36.380 }, 00:12:36.380 { 00:12:36.380 "subsystem": "sock", 00:12:36.380 "config": [ 00:12:36.380 { 00:12:36.380 "method": "sock_impl_set_options", 00:12:36.380 "params": { 00:12:36.380 "impl_name": "posix", 00:12:36.380 "recv_buf_size": 2097152, 00:12:36.380 "send_buf_size": 2097152, 00:12:36.380 "enable_recv_pipe": true, 00:12:36.380 "enable_quickack": false, 00:12:36.380 "enable_placement_id": 0, 00:12:36.380 
"enable_zerocopy_send_server": true, 00:12:36.380 "enable_zerocopy_send_client": false, 00:12:36.380 "zerocopy_threshold": 0, 00:12:36.380 "tls_version": 0, 00:12:36.380 "enable_ktls": false 00:12:36.380 } 00:12:36.380 }, 00:12:36.380 { 00:12:36.380 "method": "sock_impl_set_options", 00:12:36.380 "params": { 00:12:36.380 "impl_name": "ssl", 00:12:36.380 "recv_buf_size": 4096, 00:12:36.380 "send_buf_size": 4096, 00:12:36.380 "enable_recv_pipe": true, 00:12:36.381 "enable_quickack": false, 00:12:36.381 "enable_placement_id": 0, 00:12:36.381 "enable_zerocopy_send_server": true, 00:12:36.381 "enable_zerocopy_send_client": false, 00:12:36.381 "zerocopy_threshold": 0, 00:12:36.381 "tls_version": 0, 00:12:36.381 "enable_ktls": false 00:12:36.381 } 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "vmd", 00:12:36.381 "config": [] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "accel", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.381 "method": "accel_set_options", 00:12:36.381 "params": { 00:12:36.381 "small_cache_size": 128, 00:12:36.381 "large_cache_size": 16, 00:12:36.381 "task_count": 2048, 00:12:36.381 "sequence_count": 2048, 00:12:36.381 "buf_count": 2048 00:12:36.381 } 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "bdev", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.381 "method": "bdev_set_options", 00:12:36.381 "params": { 00:12:36.381 "bdev_io_pool_size": 65535, 00:12:36.381 "bdev_io_cache_size": 256, 00:12:36.381 "bdev_auto_examine": true, 00:12:36.381 "iobuf_small_cache_size": 128, 00:12:36.381 "iobuf_large_cache_size": 16 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "bdev_raid_set_options", 00:12:36.381 "params": { 00:12:36.381 "process_window_size_kb": 1024 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "bdev_iscsi_set_options", 00:12:36.381 "params": { 00:12:36.381 "timeout_sec": 30 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "bdev_nvme_set_options", 00:12:36.381 "params": { 00:12:36.381 "action_on_timeout": "none", 00:12:36.381 "timeout_us": 0, 00:12:36.381 "timeout_admin_us": 0, 00:12:36.381 "keep_alive_timeout_ms": 10000, 00:12:36.381 "transport_retry_count": 4, 00:12:36.381 "arbitration_burst": 0, 00:12:36.381 "low_priority_weight": 0, 00:12:36.381 "medium_priority_weight": 0, 00:12:36.381 "high_priority_weight": 0, 00:12:36.381 "nvme_adminq_poll_period_us": 10000, 00:12:36.381 "nvme_ioq_poll_period_us": 0, 00:12:36.381 "io_queue_requests": 0, 00:12:36.381 "delay_cmd_submit": true, 00:12:36.381 "bdev_retry_count": 3, 00:12:36.381 "transport_ack_timeout": 0, 00:12:36.381 "ctrlr_loss_timeout_sec": 0, 00:12:36.381 "reconnect_delay_sec": 0, 00:12:36.381 "fast_io_fail_timeout_sec": 0, 00:12:36.381 "generate_uuids": false, 00:12:36.381 "transport_tos": 0, 00:12:36.381 "io_path_stat": false, 00:12:36.381 "allow_accel_sequence": false 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "bdev_nvme_set_hotplug", 00:12:36.381 "params": { 00:12:36.381 "period_us": 100000, 00:12:36.381 "enable": false 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "bdev_malloc_create", 00:12:36.381 "params": { 00:12:36.381 "name": "malloc0", 00:12:36.381 "num_blocks": 8192, 00:12:36.381 "block_size": 4096, 00:12:36.381 "physical_block_size": 4096, 00:12:36.381 "uuid": "e4b690fc-aa8b-488e-944e-4ea6fc8f3edd", 00:12:36.381 "optimal_io_boundary": 0 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 
"method": "bdev_wait_for_examine" 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "scsi", 00:12:36.381 "config": null 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "scheduler", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.381 "method": "framework_set_scheduler", 00:12:36.381 "params": { 00:12:36.381 "name": "static" 00:12:36.381 } 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "vhost_scsi", 00:12:36.381 "config": [] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "vhost_blk", 00:12:36.381 "config": [] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "ublk", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.381 "method": "ublk_create_target", 00:12:36.381 "params": { 00:12:36.381 "cpumask": "1" 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "ublk_start_disk", 00:12:36.381 "params": { 00:12:36.381 "bdev_name": "malloc0", 00:12:36.381 "ublk_id": 0, 00:12:36.381 "num_queues": 1, 00:12:36.381 "queue_depth": 128 00:12:36.381 } 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "nbd", 00:12:36.381 "config": [] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "nvmf", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.381 "method": "nvmf_set_config", 00:12:36.381 "params": { 00:12:36.381 "discovery_filter": "match_any", 00:12:36.381 "admin_cmd_passthru": { 00:12:36.381 "identify_ctrlr": false 00:12:36.381 } 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "nvmf_set_max_subsystems", 00:12:36.381 "params": { 00:12:36.381 "max_subsystems": 1024 00:12:36.381 } 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "method": "nvmf_set_crdt", 00:12:36.381 "params": { 00:12:36.381 "crdt1": 0, 00:12:36.381 "crdt2": 0, 00:12:36.381 "crdt3": 0 00:12:36.381 } 00:12:36.381 } 00:12:36.381 ] 00:12:36.381 }, 00:12:36.381 { 00:12:36.381 "subsystem": "iscsi", 00:12:36.381 "config": [ 00:12:36.381 { 00:12:36.382 "method": "iscsi_set_options", 00:12:36.382 "params": { 00:12:36.382 "node_base": "iqn.2016-06.io.spdk", 00:12:36.382 "max_sessions": 128, 00:12:36.382 "max_connections_per_session": 2, 00:12:36.382 "max_queue_depth": 64, 00:12:36.382 "default_time2wait": 2, 00:12:36.382 "default_time2retain": 20, 00:12:36.382 "first_burst_length": 8192, 00:12:36.382 "immediate_data": true, 00:12:36.382 "allow_duplicated_isid": false, 00:12:36.382 "error_recovery_level": 0, 00:12:36.382 "nop_timeout": 60, 00:12:36.382 "nop_in_interval": 30, 00:12:36.382 "disable_chap": false, 00:12:36.382 "require_chap": false, 00:12:36.382 "mutual_chap": false, 00:12:36.382 "chap_group": 0, 00:12:36.382 "max_large_datain_per_connection": 64, 00:12:36.382 "max_r2t_per_connection": 4, 00:12:36.382 "pdu_pool_size": 36864, 00:12:36.382 "immediate_data_pool_size": 16384, 00:12:36.382 "data_out_pool_size": 2048 00:12:36.382 } 00:12:36.382 } 00:12:36.382 ] 00:12:36.382 } 00:12:36.382 ] 00:12:36.382 }' 00:12:36.382 00:06:50 -- ublk/ublk.sh@116 -- # killprocess 80507 00:12:36.382 00:06:50 -- common/autotest_common.sh@936 -- # '[' -z 80507 ']' 00:12:36.382 00:06:50 -- common/autotest_common.sh@940 -- # kill -0 80507 00:12:36.382 00:06:50 -- common/autotest_common.sh@941 -- # uname 00:12:36.382 00:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.382 00:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80507 00:12:36.382 00:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.382 00:06:50 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.382 00:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80507' 00:12:36.382 killing process with pid 80507 00:12:36.382 00:06:50 -- common/autotest_common.sh@955 -- # kill 80507 00:12:36.382 00:06:50 -- common/autotest_common.sh@960 -- # wait 80507 00:12:36.640 [2024-11-28 00:06:51.133082] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:36.640 [2024-11-28 00:06:51.179456] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:36.640 [2024-11-28 00:06:51.179581] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:36.640 [2024-11-28 00:06:51.187397] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:36.640 [2024-11-28 00:06:51.187452] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:36.640 [2024-11-28 00:06:51.187462] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:36.640 [2024-11-28 00:06:51.187487] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:36.640 [2024-11-28 00:06:51.187621] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:36.899 00:06:51 -- ublk/ublk.sh@119 -- # tgtpid=80540 00:12:36.899 00:06:51 -- ublk/ublk.sh@121 -- # waitforlisten 80540 00:12:36.899 00:06:51 -- common/autotest_common.sh@829 -- # '[' -z 80540 ']' 00:12:36.899 00:06:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.899 00:06:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.899 00:06:51 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:36.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.899 00:06:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
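(Note: the second half of the test, above and below, feeds the saved JSON back into a fresh spdk_tgt on /dev/fd/63 and checks that the ublk device reappears; a hedged sketch of that restore step, assembled from the commands visible in this log.)
# hedged sketch of the restore step -- process substitution is what produces the /dev/fd/63 seen above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(echo "$config") &
# waitforlisten "$!"
rpc_cmd ublk_get_disks | jq -r '.[0].ublk_device'   # expected: /dev/ublkb0
[[ -b /dev/ublkb0 ]]                                # the block device must exist again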
00:12:36.899 00:06:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.899 00:06:51 -- common/autotest_common.sh@10 -- # set +x 00:12:36.899 00:06:51 -- ublk/ublk.sh@118 -- # echo '{ 00:12:36.899 "subsystems": [ 00:12:36.899 { 00:12:36.899 "subsystem": "iobuf", 00:12:36.899 "config": [ 00:12:36.899 { 00:12:36.899 "method": "iobuf_set_options", 00:12:36.899 "params": { 00:12:36.899 "small_pool_count": 8192, 00:12:36.899 "large_pool_count": 1024, 00:12:36.899 "small_bufsize": 8192, 00:12:36.899 "large_bufsize": 135168 00:12:36.899 } 00:12:36.899 } 00:12:36.899 ] 00:12:36.899 }, 00:12:36.899 { 00:12:36.899 "subsystem": "sock", 00:12:36.899 "config": [ 00:12:36.899 { 00:12:36.899 "method": "sock_impl_set_options", 00:12:36.899 "params": { 00:12:36.899 "impl_name": "posix", 00:12:36.900 "recv_buf_size": 2097152, 00:12:36.900 "send_buf_size": 2097152, 00:12:36.900 "enable_recv_pipe": true, 00:12:36.900 "enable_quickack": false, 00:12:36.900 "enable_placement_id": 0, 00:12:36.900 "enable_zerocopy_send_server": true, 00:12:36.900 "enable_zerocopy_send_client": false, 00:12:36.900 "zerocopy_threshold": 0, 00:12:36.900 "tls_version": 0, 00:12:36.900 "enable_ktls": false 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "sock_impl_set_options", 00:12:36.900 "params": { 00:12:36.900 "impl_name": "ssl", 00:12:36.900 "recv_buf_size": 4096, 00:12:36.900 "send_buf_size": 4096, 00:12:36.900 "enable_recv_pipe": true, 00:12:36.900 "enable_quickack": false, 00:12:36.900 "enable_placement_id": 0, 00:12:36.900 "enable_zerocopy_send_server": true, 00:12:36.900 "enable_zerocopy_send_client": false, 00:12:36.900 "zerocopy_threshold": 0, 00:12:36.900 "tls_version": 0, 00:12:36.900 "enable_ktls": false 00:12:36.900 } 00:12:36.900 } 00:12:36.900 ] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "vmd", 00:12:36.900 "config": [] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "accel", 00:12:36.900 "config": [ 00:12:36.900 { 00:12:36.900 "method": "accel_set_options", 00:12:36.900 "params": { 00:12:36.900 "small_cache_size": 128, 00:12:36.900 "large_cache_size": 16, 00:12:36.900 "task_count": 2048, 00:12:36.900 "sequence_count": 2048, 00:12:36.900 "buf_count": 2048 00:12:36.900 } 00:12:36.900 } 00:12:36.900 ] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "bdev", 00:12:36.900 "config": [ 00:12:36.900 { 00:12:36.900 "method": "bdev_set_options", 00:12:36.900 "params": { 00:12:36.900 "bdev_io_pool_size": 65535, 00:12:36.900 "bdev_io_cache_size": 256, 00:12:36.900 "bdev_auto_examine": true, 00:12:36.900 "iobuf_small_cache_size": 128, 00:12:36.900 "iobuf_large_cache_size": 16 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_raid_set_options", 00:12:36.900 "params": { 00:12:36.900 "process_window_size_kb": 1024 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_iscsi_set_options", 00:12:36.900 "params": { 00:12:36.900 "timeout_sec": 30 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_nvme_set_options", 00:12:36.900 "params": { 00:12:36.900 "action_on_timeout": "none", 00:12:36.900 "timeout_us": 0, 00:12:36.900 "timeout_admin_us": 0, 00:12:36.900 "keep_alive_timeout_ms": 10000, 00:12:36.900 "transport_retry_count": 4, 00:12:36.900 "arbitration_burst": 0, 00:12:36.900 "low_priority_weight": 0, 00:12:36.900 "medium_priority_weight": 0, 00:12:36.900 "high_priority_weight": 0, 00:12:36.900 "nvme_adminq_poll_period_us": 10000, 00:12:36.900 "nvme_ioq_poll_period_us": 0, 00:12:36.900 
"io_queue_requests": 0, 00:12:36.900 "delay_cmd_submit": true, 00:12:36.900 "bdev_retry_count": 3, 00:12:36.900 "transport_ack_timeout": 0, 00:12:36.900 "ctrlr_loss_timeout_sec": 0, 00:12:36.900 "reconnect_delay_sec": 0, 00:12:36.900 "fast_io_fail_timeout_sec": 0, 00:12:36.900 "generate_uuids": false, 00:12:36.900 "transport_tos": 0, 00:12:36.900 "io_path_stat": false, 00:12:36.900 "allow_accel_sequence": false 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_nvme_set_hotplug", 00:12:36.900 "params": { 00:12:36.900 "period_us": 100000, 00:12:36.900 "enable": false 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_malloc_create", 00:12:36.900 "params": { 00:12:36.900 "name": "malloc0", 00:12:36.900 "num_blocks": 8192, 00:12:36.900 "block_size": 4096, 00:12:36.900 "physical_block_size": 4096, 00:12:36.900 "uuid": "e4b690fc-aa8b-488e-944e-4ea6fc8f3edd", 00:12:36.900 "optimal_io_boundary": 0 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "bdev_wait_for_examine" 00:12:36.900 } 00:12:36.900 ] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "scsi", 00:12:36.900 "config": null 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "scheduler", 00:12:36.900 "config": [ 00:12:36.900 { 00:12:36.900 "method": "framework_set_scheduler", 00:12:36.900 "params": { 00:12:36.900 "name": "static" 00:12:36.900 } 00:12:36.900 } 00:12:36.900 ] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "vhost_scsi", 00:12:36.900 "config": [] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "vhost_blk", 00:12:36.900 "config": [] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "ublk", 00:12:36.900 "config": [ 00:12:36.900 { 00:12:36.900 "method": "ublk_create_target", 00:12:36.900 "params": { 00:12:36.900 "cpumask": "1" 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "ublk_start_disk", 00:12:36.900 "params": { 00:12:36.900 "bdev_name": "malloc0", 00:12:36.900 "ublk_id": 0, 00:12:36.900 "num_queues": 1, 00:12:36.900 "queue_depth": 128 00:12:36.900 } 00:12:36.900 } 00:12:36.900 ] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "nbd", 00:12:36.900 "config": [] 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "subsystem": "nvmf", 00:12:36.900 "config": [ 00:12:36.900 { 00:12:36.900 "method": "nvmf_set_config", 00:12:36.900 "params": { 00:12:36.900 "discovery_filter": "match_any", 00:12:36.900 "admin_cmd_passthru": { 00:12:36.900 "identify_ctrlr": false 00:12:36.900 } 00:12:36.900 } 00:12:36.900 }, 00:12:36.900 { 00:12:36.900 "method": "nvmf_set_max_subsystems", 00:12:36.900 "params": { 00:12:36.900 "max_subsystems": 1024 00:12:36.900 } 00:12:36.900 }, 00:12:36.901 { 00:12:36.901 "method": "nvmf_set_crdt", 00:12:36.901 "params": { 00:12:36.901 "crdt1": 0, 00:12:36.901 "crdt2": 0, 00:12:36.901 "crdt3": 0 00:12:36.901 } 00:12:36.901 } 00:12:36.901 ] 00:12:36.901 }, 00:12:36.901 { 00:12:36.901 "subsystem": "iscsi", 00:12:36.901 "config": [ 00:12:36.901 { 00:12:36.901 "method": "iscsi_set_options", 00:12:36.901 "params": { 00:12:36.901 "node_base": "iqn.2016-06.io.spdk", 00:12:36.901 "max_sessions": 128, 00:12:36.901 "max_connections_per_session": 2, 00:12:36.901 "max_queue_depth": 64, 00:12:36.901 "default_time2wait": 2, 00:12:36.901 "default_time2retain": 20, 00:12:36.901 "first_burst_length": 8192, 00:12:36.901 "immediate_data": true, 00:12:36.901 "allow_duplicated_isid": false, 00:12:36.901 "error_recovery_level": 0, 00:12:36.901 "nop_timeout": 60, 00:12:36.901 "nop_in_interval": 30, 00:12:36.901 
"disable_chap": false, 00:12:36.901 "require_chap": false, 00:12:36.901 "mutual_chap": false, 00:12:36.901 "chap_group": 0, 00:12:36.901 "max_large_datain_per_connection": 64, 00:12:36.901 "max_r2t_per_connection": 4, 00:12:36.901 "pdu_pool_size": 36864, 00:12:36.901 "immediate_data_pool_size": 16384, 00:12:36.901 "data_out_pool_size": 2048 00:12:36.901 } 00:12:36.901 } 00:12:36.901 ] 00:12:36.901 } 00:12:36.901 ] 00:12:36.901 }' 00:12:37.159 [2024-11-28 00:06:51.534317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.159 [2024-11-28 00:06:51.534439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80540 ] 00:12:37.159 [2024-11-28 00:06:51.681592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.159 [2024-11-28 00:06:51.721760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.159 [2024-11-28 00:06:51.721947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.417 [2024-11-28 00:06:51.997580] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:37.417 [2024-11-28 00:06:52.005474] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:37.417 [2024-11-28 00:06:52.005543] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:37.417 [2024-11-28 00:06:52.005553] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:37.417 [2024-11-28 00:06:52.005563] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:37.417 [2024-11-28 00:06:52.014441] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:37.417 [2024-11-28 00:06:52.014465] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:37.675 [2024-11-28 00:06:52.021390] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:37.675 [2024-11-28 00:06:52.021478] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:37.675 [2024-11-28 00:06:52.038385] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:37.934 00:06:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.934 00:06:52 -- common/autotest_common.sh@862 -- # return 0 00:12:37.934 00:06:52 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:37.934 00:06:52 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:37.934 00:06:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.934 00:06:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.934 00:06:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.934 00:06:52 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:37.934 00:06:52 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:37.934 00:06:52 -- ublk/ublk.sh@125 -- # killprocess 80540 00:12:37.934 00:06:52 -- common/autotest_common.sh@936 -- # '[' -z 80540 ']' 00:12:37.934 00:06:52 -- common/autotest_common.sh@940 -- # kill -0 80540 00:12:37.934 00:06:52 -- common/autotest_common.sh@941 -- # uname 00:12:37.934 00:06:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:37.934 00:06:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80540 00:12:37.934 
00:06:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:37.934 killing process with pid 80540 00:12:37.934 00:06:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:37.934 00:06:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80540' 00:12:37.934 00:06:52 -- common/autotest_common.sh@955 -- # kill 80540 00:12:37.934 00:06:52 -- common/autotest_common.sh@960 -- # wait 80540 00:12:38.192 [2024-11-28 00:06:52.574026] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:38.192 [2024-11-28 00:06:52.610459] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:38.192 [2024-11-28 00:06:52.610578] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:38.192 [2024-11-28 00:06:52.618389] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:38.192 [2024-11-28 00:06:52.618437] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:38.192 [2024-11-28 00:06:52.618444] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:38.192 [2024-11-28 00:06:52.618471] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:38.192 [2024-11-28 00:06:52.618607] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:38.451 00:06:52 -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:38.451 00:12:38.451 real 0m3.158s 00:12:38.451 user 0m2.291s 00:12:38.451 sys 0m1.419s 00:12:38.451 00:06:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:38.451 ************************************ 00:12:38.451 END TEST test_save_ublk_config 00:12:38.451 ************************************ 00:12:38.451 00:06:52 -- common/autotest_common.sh@10 -- # set +x 00:12:38.451 00:06:52 -- ublk/ublk.sh@139 -- # spdk_pid=80596 00:12:38.451 00:06:52 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:38.451 00:06:52 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.451 00:06:52 -- ublk/ublk.sh@141 -- # waitforlisten 80596 00:12:38.451 00:06:52 -- common/autotest_common.sh@829 -- # '[' -z 80596 ']' 00:12:38.451 00:06:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.451 00:06:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.451 00:06:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.451 00:06:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.451 00:06:52 -- common/autotest_common.sh@10 -- # set +x 00:12:38.451 [2024-11-28 00:06:53.000666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
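(Note: test_create_ublk, which runs below against the freshly started target, exercises the per-device RPCs directly; a condensed sketch of the create/verify/stop sequence it issues, mirroring the rpc_cmd calls visible later in this log.)
# hedged sketch -- the rpc_cmd sequence test_create_ublk drives below
rpc_cmd ublk_create_target
rpc_cmd bdev_malloc_create 128 4096               # -> Malloc0
rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512     # ublk id 0 -> /dev/ublkb0, 4 queues, depth 512
rpc_cmd ublk_get_disks -n 0                       # confirm ublk_device, queue_depth, bdev_name
rpc_cmd ublk_stop_disk 0
rpc_cmd ublk_destroy_target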
00:12:38.451 [2024-11-28 00:06:53.000768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80596 ] 00:12:38.710 [2024-11-28 00:06:53.147057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.710 [2024-11-28 00:06:53.176798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.710 [2024-11-28 00:06:53.177235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.710 [2024-11-28 00:06:53.177306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.276 00:06:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.276 00:06:53 -- common/autotest_common.sh@862 -- # return 0 00:12:39.276 00:06:53 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:39.276 00:06:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:39.277 00:06:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.277 00:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.277 ************************************ 00:12:39.277 START TEST test_create_ublk 00:12:39.277 ************************************ 00:12:39.277 00:06:53 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:12:39.277 00:06:53 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:39.277 00:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.277 00:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.277 [2024-11-28 00:06:53.829349] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:39.277 00:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.277 00:06:53 -- ublk/ublk.sh@33 -- # ublk_target= 00:12:39.277 00:06:53 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:39.277 00:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.277 00:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.277 00:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.277 00:06:53 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:39.277 00:06:53 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:39.277 00:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.277 00:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.536 [2024-11-28 00:06:53.884504] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:39.536 [2024-11-28 00:06:53.884870] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:39.536 [2024-11-28 00:06:53.884884] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:39.536 [2024-11-28 00:06:53.884893] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:39.536 [2024-11-28 00:06:53.893557] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:39.536 [2024-11-28 00:06:53.893585] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:39.536 [2024-11-28 00:06:53.900390] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:39.536 [2024-11-28 00:06:53.908446] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:39.536 [2024-11-28 00:06:53.923395] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:12:39.536 00:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.536 00:06:53 -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:39.536 00:06:53 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:39.536 00:06:53 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:39.536 00:06:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.536 00:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.536 00:06:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.536 00:06:53 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:39.536 { 00:12:39.536 "ublk_device": "/dev/ublkb0", 00:12:39.536 "id": 0, 00:12:39.536 "queue_depth": 512, 00:12:39.536 "num_queues": 4, 00:12:39.536 "bdev_name": "Malloc0" 00:12:39.536 } 00:12:39.536 ]' 00:12:39.536 00:06:53 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:39.536 00:06:53 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:39.536 00:06:53 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:39.536 00:06:54 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:39.536 00:06:54 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:39.536 00:06:54 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:39.536 00:06:54 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:39.536 00:06:54 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:39.536 00:06:54 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:39.536 00:06:54 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:39.536 00:06:54 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:39.536 00:06:54 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:39.536 00:06:54 -- lvol/common.sh@41 -- # local offset=0 00:12:39.536 00:06:54 -- lvol/common.sh@42 -- # local size=134217728 00:12:39.536 00:06:54 -- lvol/common.sh@43 -- # local rw=write 00:12:39.536 00:06:54 -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:39.536 00:06:54 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:39.536 00:06:54 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:39.536 00:06:54 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:39.536 00:06:54 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:39.536 00:06:54 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:39.536 00:06:54 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:12:39.794 fio: verification read phase will never start because write phase uses all of runtime 00:12:39.794 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:39.794 fio-3.35 00:12:39.794 Starting 1 process 00:12:49.765 00:12:49.765 fio_test: (groupid=0, jobs=1): err= 0: pid=80635: Thu Nov 28 00:07:04 2024 00:12:49.765 write: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(723MiB/10001msec); 0 zone resets 00:12:49.765 clat (usec): min=35, max=9896, avg=53.22, stdev=117.98 00:12:49.765 lat (usec): min=36, max=9896, avg=53.70, stdev=118.00 00:12:49.765 clat percentiles (usec): 00:12:49.765 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:12:49.765 | 
30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 49], 00:12:49.765 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 58], 95.00th=[ 63], 00:12:49.765 | 99.00th=[ 72], 99.50th=[ 80], 99.90th=[ 2442], 99.95th=[ 3326], 00:12:49.765 | 99.99th=[ 3884] 00:12:49.765 bw ( KiB/s): min=27104, max=81200, per=99.70%, avg=73772.63, stdev=12674.87, samples=19 00:12:49.765 iops : min= 6776, max=20300, avg=18443.16, stdev=3168.72, samples=19 00:12:49.765 lat (usec) : 50=70.68%, 100=28.99%, 250=0.12%, 500=0.02%, 750=0.01% 00:12:49.765 lat (usec) : 1000=0.01% 00:12:49.765 lat (msec) : 2=0.04%, 4=0.12%, 10=0.01% 00:12:49.765 cpu : usr=2.96%, sys=14.36%, ctx=185002, majf=0, minf=797 00:12:49.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:49.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.765 issued rwts: total=0,185007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:49.765 00:12:49.765 Run status group 0 (all jobs): 00:12:49.765 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=723MiB (758MB), run=10001-10001msec 00:12:49.765 00:12:49.765 Disk stats (read/write): 00:12:49.765 ublkb0: ios=0/183114, merge=0/0, ticks=0/8276, in_queue=8276, util=98.87% 00:12:49.765 00:07:04 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:12:49.765 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.765 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:49.765 [2024-11-28 00:07:04.325952] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:49.765 [2024-11-28 00:07:04.362831] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:49.765 [2024-11-28 00:07:04.363730] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:50.022 [2024-11-28 00:07:04.370389] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:50.022 [2024-11-28 00:07:04.370615] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:50.022 [2024-11-28 00:07:04.370627] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:50.022 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.022 00:07:04 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:12:50.022 00:07:04 -- common/autotest_common.sh@650 -- # local es=0 00:12:50.022 00:07:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:12:50.022 00:07:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:50.022 00:07:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.022 00:07:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:50.022 00:07:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.022 00:07:04 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:12:50.022 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.022 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 [2024-11-28 00:07:04.386452] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:12:50.022 request: 00:12:50.022 { 00:12:50.022 "ublk_id": 0, 00:12:50.022 "method": "ublk_stop_disk", 00:12:50.022 "req_id": 1 00:12:50.022 } 00:12:50.022 Got JSON-RPC error response 00:12:50.022 response: 00:12:50.022 { 00:12:50.022 "code": -19, 
00:12:50.022 "message": "No such device" 00:12:50.022 } 00:12:50.022 00:07:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:50.022 00:07:04 -- common/autotest_common.sh@653 -- # es=1 00:12:50.022 00:07:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.022 00:07:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.022 00:07:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.022 00:07:04 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:12:50.022 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.022 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 [2024-11-28 00:07:04.402433] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:50.022 [2024-11-28 00:07:04.403353] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:50.022 [2024-11-28 00:07:04.403395] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:50.022 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.022 00:07:04 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:50.022 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.022 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.023 00:07:04 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:12:50.023 00:07:04 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:50.023 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.023 00:07:04 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:50.023 00:07:04 -- lvol/common.sh@26 -- # jq length 00:12:50.023 00:07:04 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:50.023 00:07:04 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:50.023 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.023 00:07:04 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:50.023 00:07:04 -- lvol/common.sh@28 -- # jq length 00:12:50.023 00:07:04 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:50.023 00:12:50.023 real 0m10.736s 00:12:50.023 user 0m0.599s 00:12:50.023 sys 0m1.492s 00:12:50.023 00:07:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:50.023 ************************************ 00:12:50.023 END TEST test_create_ublk 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 ************************************ 00:12:50.023 00:07:04 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:12:50.023 00:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:50.023 00:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 ************************************ 00:12:50.023 START TEST test_create_multi_ublk 00:12:50.023 ************************************ 00:12:50.023 00:07:04 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:12:50.023 00:07:04 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:12:50.023 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 [2024-11-28 00:07:04.598166] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:12:50.023 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.023 00:07:04 -- ublk/ublk.sh@62 -- # ublk_target= 00:12:50.023 00:07:04 -- ublk/ublk.sh@64 -- # seq 0 3 00:12:50.023 00:07:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.023 00:07:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:12:50.023 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.023 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.280 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.280 00:07:04 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:12:50.280 00:07:04 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:50.280 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.280 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.280 [2024-11-28 00:07:04.677489] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:50.280 [2024-11-28 00:07:04.677781] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:50.280 [2024-11-28 00:07:04.677793] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:50.280 [2024-11-28 00:07:04.677805] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:50.280 [2024-11-28 00:07:04.689415] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:50.280 [2024-11-28 00:07:04.689436] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:50.280 [2024-11-28 00:07:04.701378] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:50.280 [2024-11-28 00:07:04.701850] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:50.280 [2024-11-28 00:07:04.712531] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:50.280 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.280 00:07:04 -- ublk/ublk.sh@68 -- # ublk_id=0 00:12:50.280 00:07:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.280 00:07:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:12:50.280 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.280 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.280 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.280 00:07:04 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:12:50.280 00:07:04 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:12:50.280 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.280 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.280 [2024-11-28 00:07:04.794477] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:12:50.280 [2024-11-28 00:07:04.794768] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:12:50.280 [2024-11-28 00:07:04.794782] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:50.280 [2024-11-28 00:07:04.794787] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:50.280 [2024-11-28 00:07:04.806394] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:50.280 [2024-11-28 00:07:04.806411] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
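The trace above and below repeats one control sequence per device: ublk_create_target once, then for every disk a malloc bdev is created and ublk_start_disk drives UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS and UBLK_CMD_START_DEV. A minimal sketch of the same flow driven by hand with scripts/rpc.py (rpc_cmd in the trace is the test wrapper around it), assuming a running spdk_tgt with ublk support on the default /var/tmp/spdk.sock socket; the bdev name, size and queue settings are simply the values this test happens to use:

  # bring-up: create the ublk target, back it with a malloc bdev, expose /dev/ublkb1
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b Malloc1 128 4096      # 128 MiB bdev, 4 KiB block size
  rpc.py ublk_start_disk Malloc1 1 -q 4 -d 512       # ublk id 1, 4 queues, queue depth 512
  rpc.py ublk_get_disks                              # should list /dev/ublkb1 backed by Malloc1
  # teardown, in the same order the test uses
  rpc.py ublk_stop_disk 1
  rpc.py ublk_destroy_target
  rpc.py bdev_malloc_delete Malloc1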
00:12:50.280 [2024-11-28 00:07:04.818384] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:50.280 [2024-11-28 00:07:04.818858] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:50.280 [2024-11-28 00:07:04.843389] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:50.280 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.280 00:07:04 -- ublk/ublk.sh@68 -- # ublk_id=1 00:12:50.280 00:07:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.280 00:07:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:12:50.280 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.280 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.539 00:07:04 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:12:50.539 00:07:04 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:12:50.539 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.539 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 [2024-11-28 00:07:04.926464] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:12:50.539 [2024-11-28 00:07:04.926756] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:12:50.539 [2024-11-28 00:07:04.926768] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:12:50.539 [2024-11-28 00:07:04.926774] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:12:50.539 [2024-11-28 00:07:04.938392] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:50.539 [2024-11-28 00:07:04.938412] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:50.539 [2024-11-28 00:07:04.950381] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:50.539 [2024-11-28 00:07:04.950865] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:12:50.539 [2024-11-28 00:07:04.961373] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:12:50.539 00:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.539 00:07:04 -- ublk/ublk.sh@68 -- # ublk_id=2 00:12:50.539 00:07:04 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.539 00:07:04 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:12:50.539 00:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.539 00:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.539 00:07:05 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:12:50.539 00:07:05 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:12:50.539 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.539 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 [2024-11-28 00:07:05.033481] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:12:50.539 [2024-11-28 00:07:05.033764] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:12:50.539 [2024-11-28 00:07:05.033777] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:12:50.539 [2024-11-28 00:07:05.033782] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:12:50.539 [2024-11-28 00:07:05.045401] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:50.539 [2024-11-28 00:07:05.045416] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:50.539 [2024-11-28 00:07:05.057395] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:50.539 [2024-11-28 00:07:05.057866] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:12:50.539 [2024-11-28 00:07:05.082394] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:12:50.539 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.539 00:07:05 -- ublk/ublk.sh@68 -- # ublk_id=3 00:12:50.539 00:07:05 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:12:50.539 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.539 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:50.539 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.539 00:07:05 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:12:50.539 { 00:12:50.539 "ublk_device": "/dev/ublkb0", 00:12:50.539 "id": 0, 00:12:50.539 "queue_depth": 512, 00:12:50.539 "num_queues": 4, 00:12:50.539 "bdev_name": "Malloc0" 00:12:50.539 }, 00:12:50.539 { 00:12:50.539 "ublk_device": "/dev/ublkb1", 00:12:50.539 "id": 1, 00:12:50.539 "queue_depth": 512, 00:12:50.539 "num_queues": 4, 00:12:50.539 "bdev_name": "Malloc1" 00:12:50.539 }, 00:12:50.539 { 00:12:50.539 "ublk_device": "/dev/ublkb2", 00:12:50.539 "id": 2, 00:12:50.539 "queue_depth": 512, 00:12:50.539 "num_queues": 4, 00:12:50.539 "bdev_name": "Malloc2" 00:12:50.539 }, 00:12:50.539 { 00:12:50.539 "ublk_device": "/dev/ublkb3", 00:12:50.539 "id": 3, 00:12:50.539 "queue_depth": 512, 00:12:50.539 "num_queues": 4, 00:12:50.539 "bdev_name": "Malloc3" 00:12:50.539 } 00:12:50.539 ]' 00:12:50.539 00:07:05 -- ublk/ublk.sh@72 -- # seq 0 3 00:12:50.539 00:07:05 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.539 00:07:05 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:12:50.797 00:07:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:12:50.797 00:07:05 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:12:50.797 00:07:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:12:50.797 00:07:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:12:50.797 00:07:05 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:50.797 00:07:05 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:12:50.797 00:07:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:12:50.797 00:07:05 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:12:50.797 00:07:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:12:50.797 00:07:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:50.797 00:07:05 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:12:51.056 00:07:05 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:12:51.056 00:07:05 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.056 00:07:05 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:12:51.056 00:07:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:12:51.056 00:07:05 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:12:51.056 00:07:05 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:12:51.057 00:07:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:12:51.057 00:07:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:12:51.057 00:07:05 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.057 00:07:05 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:12:51.057 00:07:05 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:12:51.057 00:07:05 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:12:51.057 00:07:05 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:12:51.315 00:07:05 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:12:51.315 00:07:05 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:12:51.315 00:07:05 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@85 -- # seq 0 3 00:12:51.315 00:07:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.315 00:07:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:12:51.315 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.315 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:51.315 [2024-11-28 00:07:05.751461] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:51.315 [2024-11-28 00:07:05.784844] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:51.315 [2024-11-28 00:07:05.785760] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:51.315 [2024-11-28 00:07:05.794395] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:51.315 [2024-11-28 00:07:05.794621] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:51.315 [2024-11-28 00:07:05.794640] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:51.315 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.315 00:07:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:12:51.315 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.315 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:51.315 [2024-11-28 00:07:05.810435] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:12:51.315 [2024-11-28 00:07:05.839828] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:51.315 [2024-11-28 00:07:05.840809] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:12:51.315 [2024-11-28 00:07:05.846383] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:51.315 [2024-11-28 00:07:05.846604] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:12:51.315 [2024-11-28 00:07:05.846617] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:12:51.315 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.315 00:07:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:12:51.315 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.315 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:51.315 [2024-11-28 00:07:05.859449] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:12:51.315 [2024-11-28 00:07:05.897823] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:51.315 [2024-11-28 00:07:05.898734] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:12:51.315 [2024-11-28 00:07:05.905385] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:51.315 [2024-11-28 00:07:05.905591] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:12:51.315 [2024-11-28 00:07:05.905604] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:12:51.315 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.315 00:07:05 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.315 00:07:05 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:12:51.315 00:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.316 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:51.574 [2024-11-28 00:07:05.919451] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:12:51.574 [2024-11-28 00:07:05.955405] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:51.574 [2024-11-28 00:07:05.955968] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:12:51.574 [2024-11-28 00:07:05.956626] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:51.574 [2024-11-28 00:07:05.956833] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:12:51.574 [2024-11-28 00:07:05.956843] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:12:51.574 00:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.574 00:07:05 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:12:51.574 [2024-11-28 00:07:06.127437] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:51.574 [2024-11-28 00:07:06.128310] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:51.574 [2024-11-28 00:07:06.128337] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:51.574 00:07:06 -- ublk/ublk.sh@93 -- # seq 0 3 00:12:51.574 00:07:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.574 00:07:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:51.574 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.574 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 00:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.833 00:07:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.833 00:07:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:12:51.833 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.833 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 00:07:06 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.833 00:07:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.833 00:07:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:51.833 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.833 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 00:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.833 00:07:06 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:51.833 00:07:06 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:12:51.833 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.833 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 00:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.833 00:07:06 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:12:51.833 00:07:06 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:51.833 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.833 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 00:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.834 00:07:06 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:51.834 00:07:06 -- lvol/common.sh@26 -- # jq length 00:12:51.834 00:07:06 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:51.834 00:07:06 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:51.834 00:07:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.834 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:51.834 00:07:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.834 00:07:06 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:52.093 00:07:06 -- lvol/common.sh@28 -- # jq length 00:12:52.093 00:07:06 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:52.093 00:12:52.093 real 0m1.877s 00:12:52.093 user 0m0.771s 00:12:52.093 sys 0m0.143s 00:12:52.093 00:07:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.093 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:52.093 ************************************ 00:12:52.093 END TEST test_create_multi_ublk 00:12:52.093 ************************************ 00:12:52.093 00:07:06 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:12:52.093 00:07:06 -- ublk/ublk.sh@147 -- # cleanup 00:12:52.093 00:07:06 -- ublk/ublk.sh@130 -- # killprocess 80596 00:12:52.093 00:07:06 -- common/autotest_common.sh@936 -- # '[' -z 80596 ']' 00:12:52.093 00:07:06 -- common/autotest_common.sh@940 -- # kill -0 80596 00:12:52.093 00:07:06 -- common/autotest_common.sh@941 -- # uname 00:12:52.093 00:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.093 00:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80596 00:12:52.093 00:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.093 00:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.093 00:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80596' 00:12:52.093 killing process with pid 80596 00:12:52.093 00:07:06 -- common/autotest_common.sh@955 -- # kill 80596 00:12:52.093 00:07:06 -- common/autotest_common.sh@960 -- # wait 80596 00:12:52.093 [2024-11-28 00:07:06.679850] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:52.093 [2024-11-28 00:07:06.679906] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:52.352 00:12:52.352 real 0m17.332s 00:12:52.352 user 0m27.424s 00:12:52.352 sys 0m7.304s 00:12:52.352 00:07:06 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:12:52.352 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:52.352 ************************************ 00:12:52.352 END TEST ublk 00:12:52.352 ************************************ 00:12:52.611 00:07:06 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:52.611 00:07:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:52.611 00:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.611 00:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:52.611 ************************************ 00:12:52.611 START TEST ublk_recovery 00:12:52.611 ************************************ 00:12:52.611 00:07:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:52.611 * Looking for test storage... 00:12:52.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:52.611 00:07:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:52.611 00:07:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:52.611 00:07:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.611 00:07:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.611 00:07:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.611 00:07:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.611 00:07:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.611 00:07:07 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.611 00:07:07 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.611 00:07:07 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.611 00:07:07 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.611 00:07:07 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.611 00:07:07 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.611 00:07:07 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.611 00:07:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.611 00:07:07 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.611 00:07:07 -- scripts/common.sh@344 -- # : 1 00:12:52.611 00:07:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.611 00:07:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.611 00:07:07 -- scripts/common.sh@364 -- # decimal 1 00:12:52.611 00:07:07 -- scripts/common.sh@352 -- # local d=1 00:12:52.611 00:07:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.611 00:07:07 -- scripts/common.sh@354 -- # echo 1 00:12:52.611 00:07:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:52.611 00:07:07 -- scripts/common.sh@365 -- # decimal 2 00:12:52.611 00:07:07 -- scripts/common.sh@352 -- # local d=2 00:12:52.611 00:07:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.611 00:07:07 -- scripts/common.sh@354 -- # echo 2 00:12:52.611 00:07:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:52.611 00:07:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:52.611 00:07:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:52.611 00:07:07 -- scripts/common.sh@367 -- # return 0 00:12:52.611 00:07:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.611 00:07:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.611 --rc genhtml_branch_coverage=1 00:12:52.611 --rc genhtml_function_coverage=1 00:12:52.611 --rc genhtml_legend=1 00:12:52.611 --rc geninfo_all_blocks=1 00:12:52.611 --rc geninfo_unexecuted_blocks=1 00:12:52.611 00:12:52.611 ' 00:12:52.611 00:07:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.611 --rc genhtml_branch_coverage=1 00:12:52.611 --rc genhtml_function_coverage=1 00:12:52.611 --rc genhtml_legend=1 00:12:52.611 --rc geninfo_all_blocks=1 00:12:52.611 --rc geninfo_unexecuted_blocks=1 00:12:52.611 00:12:52.611 ' 00:12:52.611 00:07:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.611 --rc genhtml_branch_coverage=1 00:12:52.611 --rc genhtml_function_coverage=1 00:12:52.611 --rc genhtml_legend=1 00:12:52.611 --rc geninfo_all_blocks=1 00:12:52.612 --rc geninfo_unexecuted_blocks=1 00:12:52.612 00:12:52.612 ' 00:12:52.612 00:07:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.612 --rc genhtml_branch_coverage=1 00:12:52.612 --rc genhtml_function_coverage=1 00:12:52.612 --rc genhtml_legend=1 00:12:52.612 --rc geninfo_all_blocks=1 00:12:52.612 --rc geninfo_unexecuted_blocks=1 00:12:52.612 00:12:52.612 ' 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:52.612 00:07:07 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:52.612 00:07:07 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:52.612 00:07:07 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:52.612 00:07:07 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:52.612 00:07:07 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:52.612 00:07:07 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:52.612 00:07:07 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:52.612 00:07:07 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=80955 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 80955 00:12:52.612 00:07:07 -- 
common/autotest_common.sh@829 -- # '[' -z 80955 ']' 00:12:52.612 00:07:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.612 00:07:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.612 00:07:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.612 00:07:07 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:52.612 00:07:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.612 00:07:07 -- common/autotest_common.sh@10 -- # set +x 00:12:52.612 [2024-11-28 00:07:07.165505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.612 [2024-11-28 00:07:07.165615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80955 ] 00:12:52.870 [2024-11-28 00:07:07.312015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:52.871 [2024-11-28 00:07:07.342119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.871 [2024-11-28 00:07:07.342535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.871 [2024-11-28 00:07:07.342680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.438 00:07:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.438 00:07:07 -- common/autotest_common.sh@862 -- # return 0 00:12:53.438 00:07:07 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:12:53.438 00:07:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.438 00:07:07 -- common/autotest_common.sh@10 -- # set +x 00:12:53.438 [2024-11-28 00:07:07.982239] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:53.438 00:07:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.438 00:07:07 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:12:53.438 00:07:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.438 00:07:07 -- common/autotest_common.sh@10 -- # set +x 00:12:53.438 malloc0 00:12:53.438 00:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.438 00:07:08 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:12:53.438 00:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.438 00:07:08 -- common/autotest_common.sh@10 -- # set +x 00:12:53.438 [2024-11-28 00:07:08.013478] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:12:53.438 [2024-11-28 00:07:08.013565] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:12:53.438 [2024-11-28 00:07:08.013576] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:53.438 [2024-11-28 00:07:08.013583] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:53.438 [2024-11-28 00:07:08.022451] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:53.438 [2024-11-28 00:07:08.022473] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:53.438 [2024-11-28 00:07:08.029384] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:53.438 [2024-11-28 00:07:08.029495] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:53.438 [2024-11-28 00:07:08.033772] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:53.695 1 00:12:53.695 00:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.695 00:07:08 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:12:54.630 00:07:09 -- ublk/ublk_recovery.sh@31 -- # fio_proc=80982 00:12:54.630 00:07:09 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:12:54.630 00:07:09 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:12:54.630 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:54.630 fio-3.35 00:12:54.630 Starting 1 process 00:12:59.897 00:07:14 -- ublk/ublk_recovery.sh@36 -- # kill -9 80955 00:12:59.897 00:07:14 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:05.162 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 80955 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:05.162 00:07:19 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=81093 00:13:05.162 00:07:19 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:05.162 00:07:19 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:05.162 00:07:19 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 81093 00:13:05.162 00:07:19 -- common/autotest_common.sh@829 -- # '[' -z 81093 ']' 00:13:05.162 00:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.162 00:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.162 00:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.162 00:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.162 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:13:05.162 [2024-11-28 00:07:19.124442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
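Below is the recovery half of the test: the spdk_tgt that owned /dev/ublkb1 (pid 80955) has just been killed with SIGKILL while fio was still issuing random I/O, and a fresh target (pid 81093) is starting so the disk can be reattached with ublk_recover_disk. A condensed sketch of that sequence using only commands visible in this trace; $spdk_pid and $fio_pid are illustrative variables, and the bdev name, ublk id and queue settings are the test's values, not requirements:

  # first target: bring up /dev/ublkb1 and start I/O against it
  # (ublk_drv must already be loaded, and the real script waits for the RPC socket first)
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128       # /dev/ublkb1
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  fio_pid=$!
  kill -9 "$spdk_pid"                                # simulate a target crash mid-I/O
  # second target: restart and recover the existing device instead of starting a new one
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1                 # UBLK_CMD_START_USER_RECOVERY under the hood
  wait "$fio_pid"                                    # fio should still complete its full 60 s run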
00:13:05.162 [2024-11-28 00:07:19.124757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81093 ] 00:13:05.162 [2024-11-28 00:07:19.271052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:05.162 [2024-11-28 00:07:19.299666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.162 [2024-11-28 00:07:19.300018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.162 [2024-11-28 00:07:19.300168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.421 00:07:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.421 00:07:19 -- common/autotest_common.sh@862 -- # return 0 00:13:05.421 00:07:19 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:05.421 00:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.421 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:13:05.421 [2024-11-28 00:07:19.908256] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:05.421 00:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.421 00:07:19 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:05.421 00:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.421 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:13:05.421 malloc0 00:13:05.421 00:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.421 00:07:19 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:05.421 00:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.421 00:07:19 -- common/autotest_common.sh@10 -- # set +x 00:13:05.421 [2024-11-28 00:07:19.941485] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:05.421 [2024-11-28 00:07:19.941520] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:05.421 [2024-11-28 00:07:19.941527] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:05.421 [2024-11-28 00:07:19.947418] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:05.421 [2024-11-28 00:07:19.947436] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:05.421 1 00:13:05.421 [2024-11-28 00:07:19.947500] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:05.421 00:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.421 00:07:19 -- ublk/ublk_recovery.sh@52 -- # wait 80982 00:13:31.987 [2024-11-28 00:07:44.239386] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:31.987 [2024-11-28 00:07:44.245889] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:31.987 [2024-11-28 00:07:44.253559] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:31.987 [2024-11-28 00:07:44.253582] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:13:58.532 00:13:58.532 fio_test: (groupid=0, jobs=1): err= 0: pid=80991: Thu Nov 28 00:08:09 2024 00:13:58.532 read: IOPS=16.7k, BW=65.1MiB/s (68.2MB/s)(3904MiB/60002msec) 00:13:58.532 slat (nsec): min=847, max=662338, avg=4788.06, 
stdev=1839.42 00:13:58.532 clat (usec): min=702, max=30212k, avg=3682.57, stdev=234068.53 00:13:58.532 lat (usec): min=707, max=30212k, avg=3687.36, stdev=234068.54 00:13:58.532 clat percentiles (usec): 00:13:58.532 | 1.00th=[ 1516], 5.00th=[ 1647], 10.00th=[ 1680], 20.00th=[ 1696], 00:13:58.532 | 30.00th=[ 1713], 40.00th=[ 1729], 50.00th=[ 1745], 60.00th=[ 1762], 00:13:58.532 | 70.00th=[ 1778], 80.00th=[ 1795], 90.00th=[ 1844], 95.00th=[ 2900], 00:13:58.532 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 6128], 99.95th=[ 6783], 00:13:58.532 | 99.99th=[12911] 00:13:58.532 bw ( KiB/s): min= 2538, max=138304, per=100.00%, avg=131151.63, stdev=21553.76, samples=60 00:13:58.532 iops : min= 634, max=34576, avg=32787.90, stdev=5388.49, samples=60 00:13:58.532 write: IOPS=16.6k, BW=65.0MiB/s (68.1MB/s)(3899MiB/60002msec); 0 zone resets 00:13:58.532 slat (nsec): min=900, max=293264, avg=4820.14, stdev=1683.54 00:13:58.532 clat (usec): min=538, max=30212k, avg=3997.33, stdev=249342.52 00:13:58.532 lat (usec): min=543, max=30212k, avg=4002.15, stdev=249342.52 00:13:58.532 clat percentiles (usec): 00:13:58.532 | 1.00th=[ 1549], 5.00th=[ 1729], 10.00th=[ 1762], 20.00th=[ 1778], 00:13:58.532 | 30.00th=[ 1795], 40.00th=[ 1811], 50.00th=[ 1827], 60.00th=[ 1844], 00:13:58.532 | 70.00th=[ 1860], 80.00th=[ 1876], 90.00th=[ 1926], 95.00th=[ 2835], 00:13:58.532 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6194], 99.95th=[ 6915], 00:13:58.532 | 99.99th=[12911] 00:13:58.532 bw ( KiB/s): min= 2578, max=138248, per=100.00%, avg=131005.10, stdev=21691.17, samples=60 00:13:58.532 iops : min= 644, max=34562, avg=32751.27, stdev=5422.84, samples=60 00:13:58.532 lat (usec) : 750=0.01%, 1000=0.01% 00:13:58.532 lat (msec) : 2=92.64%, 4=5.14%, 10=2.19%, 20=0.01%, >=2000=0.01% 00:13:58.532 cpu : usr=3.62%, sys=16.40%, ctx=69400, majf=0, minf=13 00:13:58.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:58.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.532 issued rwts: total=999318,998047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.532 00:13:58.532 Run status group 0 (all jobs): 00:13:58.532 READ: bw=65.1MiB/s (68.2MB/s), 65.1MiB/s-65.1MiB/s (68.2MB/s-68.2MB/s), io=3904MiB (4093MB), run=60002-60002msec 00:13:58.532 WRITE: bw=65.0MiB/s (68.1MB/s), 65.0MiB/s-65.0MiB/s (68.1MB/s-68.1MB/s), io=3899MiB (4088MB), run=60002-60002msec 00:13:58.532 00:13:58.532 Disk stats (read/write): 00:13:58.532 ublkb1: ios=995751/994480, merge=0/0, ticks=3623123/3856186, in_queue=7479309, util=99.90% 00:13:58.532 00:08:09 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:13:58.532 00:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.532 00:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 [2024-11-28 00:08:09.295863] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:58.532 [2024-11-28 00:08:09.337407] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:58.532 [2024-11-28 00:08:09.337531] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:58.532 [2024-11-28 00:08:09.345399] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:58.532 [2024-11-28 00:08:09.345533] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
00:13:58.532 [2024-11-28 00:08:09.345557] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:58.532 00:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.532 00:08:09 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:13:58.532 00:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.532 00:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 [2024-11-28 00:08:09.361439] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:58.532 [2024-11-28 00:08:09.362328] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:58.532 [2024-11-28 00:08:09.362355] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:58.532 00:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.532 00:08:09 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:13:58.532 00:08:09 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:13:58.532 00:08:09 -- ublk/ublk_recovery.sh@14 -- # killprocess 81093 00:13:58.532 00:08:09 -- common/autotest_common.sh@936 -- # '[' -z 81093 ']' 00:13:58.532 00:08:09 -- common/autotest_common.sh@940 -- # kill -0 81093 00:13:58.532 00:08:09 -- common/autotest_common.sh@941 -- # uname 00:13:58.532 00:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.532 00:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81093 00:13:58.532 killing process with pid 81093 00:13:58.532 00:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:58.532 00:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:58.532 00:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81093' 00:13:58.532 00:08:09 -- common/autotest_common.sh@955 -- # kill 81093 00:13:58.532 00:08:09 -- common/autotest_common.sh@960 -- # wait 81093 00:13:58.532 [2024-11-28 00:08:09.562707] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:58.532 [2024-11-28 00:08:09.562749] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:58.532 00:13:58.532 real 1m2.863s 00:13:58.532 user 1m46.183s 00:13:58.532 sys 0m21.098s 00:13:58.532 00:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.532 00:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 ************************************ 00:13:58.532 END TEST ublk_recovery 00:13:58.532 ************************************ 00:13:58.532 00:08:09 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@255 -- # timing_exit lib 00:13:58.532 00:08:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.532 00:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 00:08:09 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:13:58.532 00:08:09 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:58.532 00:08:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:58.532 00:08:09 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:13:58.532 00:08:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 ************************************ 00:13:58.532 START TEST ftl 00:13:58.532 ************************************ 00:13:58.532 00:08:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:58.532 * Looking for test storage... 00:13:58.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.532 00:08:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:58.532 00:08:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:58.532 00:08:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:58.532 00:08:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:58.532 00:08:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:58.532 00:08:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:58.532 00:08:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:58.532 00:08:10 -- scripts/common.sh@335 -- # IFS=.-: 00:13:58.532 00:08:10 -- scripts/common.sh@335 -- # read -ra ver1 00:13:58.532 00:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.532 00:08:10 -- scripts/common.sh@336 -- # read -ra ver2 00:13:58.532 00:08:10 -- scripts/common.sh@337 -- # local 'op=<' 00:13:58.532 00:08:10 -- scripts/common.sh@339 -- # ver1_l=2 00:13:58.532 00:08:10 -- scripts/common.sh@340 -- # ver2_l=1 00:13:58.532 00:08:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:58.532 00:08:10 -- scripts/common.sh@343 -- # case "$op" in 00:13:58.532 00:08:10 -- scripts/common.sh@344 -- # : 1 00:13:58.532 00:08:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:58.532 00:08:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.532 00:08:10 -- scripts/common.sh@364 -- # decimal 1 00:13:58.532 00:08:10 -- scripts/common.sh@352 -- # local d=1 00:13:58.532 00:08:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.532 00:08:10 -- scripts/common.sh@354 -- # echo 1 00:13:58.532 00:08:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:58.532 00:08:10 -- scripts/common.sh@365 -- # decimal 2 00:13:58.532 00:08:10 -- scripts/common.sh@352 -- # local d=2 00:13:58.532 00:08:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.532 00:08:10 -- scripts/common.sh@354 -- # echo 2 00:13:58.532 00:08:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:58.532 00:08:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:58.532 00:08:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:58.532 00:08:10 -- scripts/common.sh@367 -- # return 0 00:13:58.532 00:08:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.532 00:08:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.532 --rc genhtml_branch_coverage=1 00:13:58.532 --rc genhtml_function_coverage=1 00:13:58.532 --rc genhtml_legend=1 00:13:58.532 --rc geninfo_all_blocks=1 00:13:58.532 --rc geninfo_unexecuted_blocks=1 00:13:58.532 00:13:58.532 ' 00:13:58.532 00:08:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.532 --rc genhtml_branch_coverage=1 00:13:58.532 --rc genhtml_function_coverage=1 00:13:58.532 --rc genhtml_legend=1 00:13:58.532 --rc geninfo_all_blocks=1 00:13:58.532 --rc geninfo_unexecuted_blocks=1 00:13:58.532 00:13:58.532 ' 00:13:58.532 00:08:10 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.532 --rc genhtml_branch_coverage=1 00:13:58.532 --rc genhtml_function_coverage=1 00:13:58.532 --rc genhtml_legend=1 00:13:58.532 --rc geninfo_all_blocks=1 00:13:58.532 --rc geninfo_unexecuted_blocks=1 00:13:58.532 00:13:58.532 ' 00:13:58.532 00:08:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.532 --rc genhtml_branch_coverage=1 00:13:58.532 --rc genhtml_function_coverage=1 00:13:58.532 --rc genhtml_legend=1 00:13:58.532 --rc geninfo_all_blocks=1 00:13:58.532 --rc geninfo_unexecuted_blocks=1 00:13:58.532 00:13:58.532 ' 00:13:58.532 00:08:10 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:58.532 00:08:10 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:58.532 00:08:10 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.532 00:08:10 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.533 00:08:10 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:58.533 00:08:10 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:58.533 00:08:10 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.533 00:08:10 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:58.533 00:08:10 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:58.533 00:08:10 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:10 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:10 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:58.533 00:08:10 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:58.533 00:08:10 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:58.533 00:08:10 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:58.533 00:08:10 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:58.533 00:08:10 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:58.533 00:08:10 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:10 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:10 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:58.533 00:08:10 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:58.533 00:08:10 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:58.533 00:08:10 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:58.533 00:08:10 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:58.533 00:08:10 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:58.533 00:08:10 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:58.533 00:08:10 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:58.533 00:08:10 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:58.533 00:08:10 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:58.533 00:08:10 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.533 00:08:10 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:13:58.533 00:08:10 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:13:58.533 00:08:10 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:13:58.533 00:08:10 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:13:58.533 00:08:10 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:58.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.533 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:58.533 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:58.533 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:58.533 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:58.533 00:08:10 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:58.533 00:08:10 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=81896 00:13:58.533 00:08:10 -- ftl/ftl.sh@38 -- # waitforlisten 81896 00:13:58.533 00:08:10 -- common/autotest_common.sh@829 -- # '[' -z 81896 ']' 00:13:58.533 00:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.533 00:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.533 00:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.533 00:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.533 00:08:10 -- common/autotest_common.sh@10 -- # set +x 00:13:58.533 [2024-11-28 00:08:10.575904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:58.533 [2024-11-28 00:08:10.576166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81896 ] 00:13:58.533 [2024-11-28 00:08:10.723408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.533 [2024-11-28 00:08:10.750037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.533 [2024-11-28 00:08:10.750338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.533 00:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.533 00:08:11 -- common/autotest_common.sh@862 -- # return 0 00:13:58.533 00:08:11 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:13:58.533 00:08:11 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:58.533 00:08:11 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:13:58.533 00:08:11 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:58.533 00:08:12 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:13:58.533 00:08:12 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:58.533 00:08:12 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:58.533 00:08:12 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:13:58.533 00:08:12 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:13:58.533 00:08:12 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:13:58.533 00:08:12 -- ftl/ftl.sh@50 
-- # break 00:13:58.533 00:08:12 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:13:58.533 00:08:12 -- ftl/ftl.sh@59 -- # base_size=1310720 00:13:58.533 00:08:12 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:58.533 00:08:12 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:58.533 00:08:12 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:13:58.533 00:08:12 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:13:58.533 00:08:12 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:13:58.533 00:08:12 -- ftl/ftl.sh@63 -- # break 00:13:58.533 00:08:12 -- ftl/ftl.sh@66 -- # killprocess 81896 00:13:58.533 00:08:12 -- common/autotest_common.sh@936 -- # '[' -z 81896 ']' 00:13:58.533 00:08:12 -- common/autotest_common.sh@940 -- # kill -0 81896 00:13:58.533 00:08:12 -- common/autotest_common.sh@941 -- # uname 00:13:58.533 00:08:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.533 00:08:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81896 00:13:58.533 00:08:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:58.533 killing process with pid 81896 00:13:58.533 00:08:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:58.533 00:08:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81896' 00:13:58.533 00:08:12 -- common/autotest_common.sh@955 -- # kill 81896 00:13:58.533 00:08:12 -- common/autotest_common.sh@960 -- # wait 81896 00:13:58.533 00:08:12 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:13:58.533 00:08:12 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:13:58.533 00:08:12 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:58.533 00:08:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:58.533 00:08:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.533 00:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.533 ************************************ 00:13:58.533 START TEST ftl_fio_basic 00:13:58.533 ************************************ 00:13:58.533 00:08:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:58.533 * Looking for test storage... 
00:13:58.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.533 00:08:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:58.533 00:08:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:58.533 00:08:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:58.533 00:08:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:58.533 00:08:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:58.533 00:08:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:58.533 00:08:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:58.533 00:08:13 -- scripts/common.sh@335 -- # IFS=.-: 00:13:58.533 00:08:13 -- scripts/common.sh@335 -- # read -ra ver1 00:13:58.533 00:08:13 -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.533 00:08:13 -- scripts/common.sh@336 -- # read -ra ver2 00:13:58.533 00:08:13 -- scripts/common.sh@337 -- # local 'op=<' 00:13:58.533 00:08:13 -- scripts/common.sh@339 -- # ver1_l=2 00:13:58.533 00:08:13 -- scripts/common.sh@340 -- # ver2_l=1 00:13:58.533 00:08:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:58.533 00:08:13 -- scripts/common.sh@343 -- # case "$op" in 00:13:58.533 00:08:13 -- scripts/common.sh@344 -- # : 1 00:13:58.533 00:08:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:58.533 00:08:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.533 00:08:13 -- scripts/common.sh@364 -- # decimal 1 00:13:58.533 00:08:13 -- scripts/common.sh@352 -- # local d=1 00:13:58.533 00:08:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.533 00:08:13 -- scripts/common.sh@354 -- # echo 1 00:13:58.533 00:08:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:58.533 00:08:13 -- scripts/common.sh@365 -- # decimal 2 00:13:58.533 00:08:13 -- scripts/common.sh@352 -- # local d=2 00:13:58.533 00:08:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.533 00:08:13 -- scripts/common.sh@354 -- # echo 2 00:13:58.533 00:08:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:58.533 00:08:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:58.533 00:08:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:58.533 00:08:13 -- scripts/common.sh@367 -- # return 0 00:13:58.533 00:08:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.533 00:08:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.533 --rc genhtml_branch_coverage=1 00:13:58.533 --rc genhtml_function_coverage=1 00:13:58.533 --rc genhtml_legend=1 00:13:58.533 --rc geninfo_all_blocks=1 00:13:58.533 --rc geninfo_unexecuted_blocks=1 00:13:58.533 00:13:58.533 ' 00:13:58.533 00:08:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.533 --rc genhtml_branch_coverage=1 00:13:58.533 --rc genhtml_function_coverage=1 00:13:58.533 --rc genhtml_legend=1 00:13:58.533 --rc geninfo_all_blocks=1 00:13:58.533 --rc geninfo_unexecuted_blocks=1 00:13:58.533 00:13:58.533 ' 00:13:58.533 00:08:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.533 --rc genhtml_branch_coverage=1 00:13:58.533 --rc genhtml_function_coverage=1 00:13:58.533 --rc genhtml_legend=1 00:13:58.533 --rc geninfo_all_blocks=1 00:13:58.533 --rc geninfo_unexecuted_blocks=1 00:13:58.533 00:13:58.533 ' 00:13:58.533 00:08:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.533 --rc genhtml_branch_coverage=1 00:13:58.533 --rc genhtml_function_coverage=1 00:13:58.533 --rc genhtml_legend=1 00:13:58.533 --rc geninfo_all_blocks=1 00:13:58.533 --rc geninfo_unexecuted_blocks=1 00:13:58.533 00:13:58.533 ' 00:13:58.533 00:08:13 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:58.533 00:08:13 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:13:58.533 00:08:13 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.533 00:08:13 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:58.533 00:08:13 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:58.533 00:08:13 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:58.533 00:08:13 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.533 00:08:13 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:58.533 00:08:13 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:58.533 00:08:13 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:13 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:13 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:58.533 00:08:13 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:58.533 00:08:13 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:58.533 00:08:13 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:58.533 00:08:13 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:58.533 00:08:13 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:58.533 00:08:13 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:13 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.533 00:08:13 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:58.533 00:08:13 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:58.533 00:08:13 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:58.533 00:08:13 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:58.533 00:08:13 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:58.533 00:08:13 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:58.533 00:08:13 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:58.533 00:08:13 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:58.533 00:08:13 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:58.533 00:08:13 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:58.533 00:08:13 -- ftl/fio.sh@11 -- # declare -A suite 00:13:58.533 00:08:13 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:58.533 00:08:13 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:13:58.533 00:08:13 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:13:58.533 00:08:13 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:58.533 00:08:13 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:13:58.533 00:08:13 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:13:58.533 00:08:13 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:58.533 00:08:13 -- ftl/fio.sh@26 -- # uuid= 00:13:58.533 00:08:13 -- ftl/fio.sh@27 -- # timeout=240 00:13:58.533 00:08:13 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:13:58.533 00:08:13 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:13:58.533 00:08:13 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:13:58.533 00:08:13 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:13:58.533 00:08:13 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:58.533 00:08:13 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:58.533 00:08:13 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:13:58.533 00:08:13 -- ftl/fio.sh@45 -- # svcpid=82005 00:13:58.533 00:08:13 -- ftl/fio.sh@46 -- # waitforlisten 82005 00:13:58.533 00:08:13 -- common/autotest_common.sh@829 -- # '[' -z 82005 ']' 00:13:58.533 00:08:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.533 00:08:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.533 00:08:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.533 00:08:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.533 00:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:58.533 00:08:13 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:13:58.792 [2024-11-28 00:08:13.161825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:58.792 [2024-11-28 00:08:13.162100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82005 ] 00:13:58.792 [2024-11-28 00:08:13.307721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.792 [2024-11-28 00:08:13.336077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.792 [2024-11-28 00:08:13.336475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.792 [2024-11-28 00:08:13.336860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.792 [2024-11-28 00:08:13.336938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.727 00:08:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.727 00:08:13 -- common/autotest_common.sh@862 -- # return 0 00:13:59.727 00:08:13 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:13:59.727 00:08:13 -- ftl/common.sh@54 -- # local name=nvme0 00:13:59.727 00:08:13 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:13:59.727 00:08:13 -- ftl/common.sh@56 -- # local size=103424 00:13:59.727 00:08:13 -- ftl/common.sh@59 -- # local base_bdev 00:13:59.727 00:08:13 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:13:59.727 00:08:14 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:13:59.727 00:08:14 -- ftl/common.sh@62 -- # local base_size 00:13:59.727 00:08:14 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:13:59.727 00:08:14 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:13:59.727 00:08:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:59.727 00:08:14 -- common/autotest_common.sh@1369 -- # local bs 00:13:59.727 00:08:14 -- common/autotest_common.sh@1370 -- # local nb 00:13:59.727 00:08:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:13:59.986 00:08:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:59.986 { 00:13:59.986 "name": "nvme0n1", 00:13:59.986 "aliases": [ 00:13:59.986 "6bc50a87-115f-48f2-a9fc-aa3cdaf819a6" 00:13:59.986 ], 00:13:59.986 "product_name": "NVMe disk", 00:13:59.986 "block_size": 4096, 00:13:59.986 "num_blocks": 1310720, 00:13:59.986 "uuid": "6bc50a87-115f-48f2-a9fc-aa3cdaf819a6", 00:13:59.986 "assigned_rate_limits": { 00:13:59.986 "rw_ios_per_sec": 0, 00:13:59.986 "rw_mbytes_per_sec": 0, 00:13:59.986 "r_mbytes_per_sec": 0, 00:13:59.986 "w_mbytes_per_sec": 0 00:13:59.986 }, 00:13:59.986 "claimed": false, 00:13:59.986 "zoned": false, 00:13:59.986 "supported_io_types": { 00:13:59.986 "read": true, 00:13:59.986 "write": true, 00:13:59.986 "unmap": true, 00:13:59.986 "write_zeroes": true, 00:13:59.986 "flush": true, 00:13:59.986 "reset": true, 00:13:59.986 "compare": true, 00:13:59.986 "compare_and_write": false, 00:13:59.986 "abort": true, 00:13:59.986 "nvme_admin": true, 00:13:59.986 "nvme_io": true 00:13:59.986 }, 00:13:59.986 "driver_specific": { 00:13:59.986 "nvme": [ 00:13:59.986 { 00:13:59.986 "pci_address": "0000:00:07.0", 00:13:59.986 "trid": { 00:13:59.986 "trtype": "PCIe", 00:13:59.986 "traddr": "0000:00:07.0" 00:13:59.986 }, 00:13:59.986 "ctrlr_data": { 00:13:59.986 "cntlid": 0, 00:13:59.986 "vendor_id": "0x1b36", 00:13:59.986 "model_number": "QEMU NVMe Ctrl", 00:13:59.986 "serial_number": 
"12341", 00:13:59.986 "firmware_revision": "8.0.0", 00:13:59.986 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:59.986 "oacs": { 00:13:59.986 "security": 0, 00:13:59.986 "format": 1, 00:13:59.986 "firmware": 0, 00:13:59.986 "ns_manage": 1 00:13:59.986 }, 00:13:59.986 "multi_ctrlr": false, 00:13:59.986 "ana_reporting": false 00:13:59.986 }, 00:13:59.986 "vs": { 00:13:59.986 "nvme_version": "1.4" 00:13:59.986 }, 00:13:59.986 "ns_data": { 00:13:59.986 "id": 1, 00:13:59.986 "can_share": false 00:13:59.986 } 00:13:59.986 } 00:13:59.986 ], 00:13:59.986 "mp_policy": "active_passive" 00:13:59.986 } 00:13:59.986 } 00:13:59.986 ]' 00:13:59.986 00:08:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:59.986 00:08:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:59.986 00:08:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:59.986 00:08:14 -- common/autotest_common.sh@1373 -- # nb=1310720 00:13:59.986 00:08:14 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:13:59.986 00:08:14 -- common/autotest_common.sh@1377 -- # echo 5120 00:13:59.986 00:08:14 -- ftl/common.sh@63 -- # base_size=5120 00:13:59.986 00:08:14 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:13:59.986 00:08:14 -- ftl/common.sh@67 -- # clear_lvols 00:13:59.986 00:08:14 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:59.986 00:08:14 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:00.245 00:08:14 -- ftl/common.sh@28 -- # stores= 00:14:00.245 00:08:14 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:00.245 00:08:14 -- ftl/common.sh@68 -- # lvs=f6869b7c-9c5c-490d-8f59-ca603470ed21 00:14:00.245 00:08:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f6869b7c-9c5c-490d-8f59-ca603470ed21 00:14:00.504 00:08:15 -- ftl/fio.sh@48 -- # split_bdev=1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.504 00:08:15 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.504 00:08:15 -- ftl/common.sh@35 -- # local name=nvc0 00:14:00.504 00:08:15 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:00.504 00:08:15 -- ftl/common.sh@37 -- # local base_bdev=1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.504 00:08:15 -- ftl/common.sh@38 -- # local cache_size= 00:14:00.504 00:08:15 -- ftl/common.sh@41 -- # get_bdev_size 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.504 00:08:15 -- common/autotest_common.sh@1367 -- # local bdev_name=1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.504 00:08:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:00.504 00:08:15 -- common/autotest_common.sh@1369 -- # local bs 00:14:00.504 00:08:15 -- common/autotest_common.sh@1370 -- # local nb 00:14:00.504 00:08:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:00.762 00:08:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:00.762 { 00:14:00.762 "name": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:00.762 "aliases": [ 00:14:00.762 "lvs/nvme0n1p0" 00:14:00.762 ], 00:14:00.762 "product_name": "Logical Volume", 00:14:00.762 "block_size": 4096, 00:14:00.762 "num_blocks": 26476544, 00:14:00.762 "uuid": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:00.762 "assigned_rate_limits": { 00:14:00.762 "rw_ios_per_sec": 0, 00:14:00.762 "rw_mbytes_per_sec": 0, 00:14:00.762 "r_mbytes_per_sec": 0, 00:14:00.762 
"w_mbytes_per_sec": 0 00:14:00.762 }, 00:14:00.762 "claimed": false, 00:14:00.762 "zoned": false, 00:14:00.762 "supported_io_types": { 00:14:00.762 "read": true, 00:14:00.762 "write": true, 00:14:00.762 "unmap": true, 00:14:00.762 "write_zeroes": true, 00:14:00.762 "flush": false, 00:14:00.762 "reset": true, 00:14:00.762 "compare": false, 00:14:00.762 "compare_and_write": false, 00:14:00.762 "abort": false, 00:14:00.762 "nvme_admin": false, 00:14:00.762 "nvme_io": false 00:14:00.762 }, 00:14:00.762 "driver_specific": { 00:14:00.762 "lvol": { 00:14:00.762 "lvol_store_uuid": "f6869b7c-9c5c-490d-8f59-ca603470ed21", 00:14:00.762 "base_bdev": "nvme0n1", 00:14:00.762 "thin_provision": true, 00:14:00.762 "snapshot": false, 00:14:00.762 "clone": false, 00:14:00.762 "esnap_clone": false 00:14:00.762 } 00:14:00.762 } 00:14:00.762 } 00:14:00.762 ]' 00:14:00.762 00:08:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:00.762 00:08:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:00.762 00:08:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:00.762 00:08:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:00.762 00:08:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:00.762 00:08:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:00.762 00:08:15 -- ftl/common.sh@41 -- # local base_size=5171 00:14:00.762 00:08:15 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:00.762 00:08:15 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:01.021 00:08:15 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:01.021 00:08:15 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:01.021 00:08:15 -- ftl/common.sh@48 -- # get_bdev_size 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.021 00:08:15 -- common/autotest_common.sh@1367 -- # local bdev_name=1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.021 00:08:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:01.021 00:08:15 -- common/autotest_common.sh@1369 -- # local bs 00:14:01.021 00:08:15 -- common/autotest_common.sh@1370 -- # local nb 00:14:01.021 00:08:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.280 00:08:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:01.280 { 00:14:01.280 "name": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:01.280 "aliases": [ 00:14:01.280 "lvs/nvme0n1p0" 00:14:01.280 ], 00:14:01.280 "product_name": "Logical Volume", 00:14:01.280 "block_size": 4096, 00:14:01.280 "num_blocks": 26476544, 00:14:01.280 "uuid": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:01.280 "assigned_rate_limits": { 00:14:01.280 "rw_ios_per_sec": 0, 00:14:01.280 "rw_mbytes_per_sec": 0, 00:14:01.280 "r_mbytes_per_sec": 0, 00:14:01.280 "w_mbytes_per_sec": 0 00:14:01.280 }, 00:14:01.280 "claimed": false, 00:14:01.280 "zoned": false, 00:14:01.280 "supported_io_types": { 00:14:01.280 "read": true, 00:14:01.280 "write": true, 00:14:01.280 "unmap": true, 00:14:01.280 "write_zeroes": true, 00:14:01.280 "flush": false, 00:14:01.280 "reset": true, 00:14:01.280 "compare": false, 00:14:01.280 "compare_and_write": false, 00:14:01.280 "abort": false, 00:14:01.280 "nvme_admin": false, 00:14:01.280 "nvme_io": false 00:14:01.280 }, 00:14:01.280 "driver_specific": { 00:14:01.280 "lvol": { 00:14:01.280 "lvol_store_uuid": "f6869b7c-9c5c-490d-8f59-ca603470ed21", 00:14:01.280 "base_bdev": "nvme0n1", 00:14:01.280 "thin_provision": true, 
00:14:01.280 "snapshot": false, 00:14:01.280 "clone": false, 00:14:01.280 "esnap_clone": false 00:14:01.280 } 00:14:01.280 } 00:14:01.280 } 00:14:01.280 ]' 00:14:01.280 00:08:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:01.280 00:08:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:01.280 00:08:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:01.280 00:08:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:01.280 00:08:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:01.280 00:08:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:01.280 00:08:15 -- ftl/common.sh@48 -- # cache_size=5171 00:14:01.280 00:08:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:01.539 00:08:15 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:01.539 00:08:15 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:01.539 00:08:15 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:01.539 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:01.539 00:08:15 -- ftl/fio.sh@56 -- # get_bdev_size 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.539 00:08:15 -- common/autotest_common.sh@1367 -- # local bdev_name=1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.539 00:08:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:01.539 00:08:15 -- common/autotest_common.sh@1369 -- # local bs 00:14:01.539 00:08:15 -- common/autotest_common.sh@1370 -- # local nb 00:14:01.539 00:08:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 00:14:01.539 00:08:16 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:01.539 { 00:14:01.539 "name": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:01.539 "aliases": [ 00:14:01.539 "lvs/nvme0n1p0" 00:14:01.539 ], 00:14:01.539 "product_name": "Logical Volume", 00:14:01.539 "block_size": 4096, 00:14:01.539 "num_blocks": 26476544, 00:14:01.539 "uuid": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:01.539 "assigned_rate_limits": { 00:14:01.539 "rw_ios_per_sec": 0, 00:14:01.539 "rw_mbytes_per_sec": 0, 00:14:01.539 "r_mbytes_per_sec": 0, 00:14:01.539 "w_mbytes_per_sec": 0 00:14:01.539 }, 00:14:01.539 "claimed": false, 00:14:01.539 "zoned": false, 00:14:01.539 "supported_io_types": { 00:14:01.539 "read": true, 00:14:01.539 "write": true, 00:14:01.539 "unmap": true, 00:14:01.539 "write_zeroes": true, 00:14:01.540 "flush": false, 00:14:01.540 "reset": true, 00:14:01.540 "compare": false, 00:14:01.540 "compare_and_write": false, 00:14:01.540 "abort": false, 00:14:01.540 "nvme_admin": false, 00:14:01.540 "nvme_io": false 00:14:01.540 }, 00:14:01.540 "driver_specific": { 00:14:01.540 "lvol": { 00:14:01.540 "lvol_store_uuid": "f6869b7c-9c5c-490d-8f59-ca603470ed21", 00:14:01.540 "base_bdev": "nvme0n1", 00:14:01.540 "thin_provision": true, 00:14:01.540 "snapshot": false, 00:14:01.540 "clone": false, 00:14:01.540 "esnap_clone": false 00:14:01.540 } 00:14:01.540 } 00:14:01.540 } 00:14:01.540 ]' 00:14:01.800 00:08:16 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:01.800 00:08:16 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:01.800 00:08:16 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:01.800 00:08:16 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:01.800 00:08:16 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:01.800 00:08:16 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:01.800 
00:08:16 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:01.800 00:08:16 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:01.800 00:08:16 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1cefe38e-ab7a-44d4-9c7d-b0113084a0c0 -c nvc0n1p0 --l2p_dram_limit 60 00:14:01.800 [2024-11-28 00:08:16.364000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.364121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:01.800 [2024-11-28 00:08:16.364140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:01.800 [2024-11-28 00:08:16.364155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.364222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.364231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:01.800 [2024-11-28 00:08:16.364240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:01.800 [2024-11-28 00:08:16.364246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.364281] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:01.800 [2024-11-28 00:08:16.364488] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:01.800 [2024-11-28 00:08:16.364501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.364507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:01.800 [2024-11-28 00:08:16.364515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:14:01.800 [2024-11-28 00:08:16.364521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.364603] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 555580ee-7209-498f-8841-89f5c42ae0c5 00:14:01.800 [2024-11-28 00:08:16.365564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.365585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:01.800 [2024-11-28 00:08:16.365593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:14:01.800 [2024-11-28 00:08:16.365601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.370175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.370202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:01.800 [2024-11-28 00:08:16.370209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.520 ms 00:14:01.800 [2024-11-28 00:08:16.370218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.370288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.370298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:01.800 [2024-11-28 00:08:16.370304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:14:01.800 [2024-11-28 00:08:16.370319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.370360] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.370380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:01.800 [2024-11-28 00:08:16.370386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:01.800 [2024-11-28 00:08:16.370393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.370415] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:01.800 [2024-11-28 00:08:16.371642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.371664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:01.800 [2024-11-28 00:08:16.371672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:14:01.800 [2024-11-28 00:08:16.371678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.371711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.371717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:01.800 [2024-11-28 00:08:16.371726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:01.800 [2024-11-28 00:08:16.371732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.371752] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:01.800 [2024-11-28 00:08:16.371844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:01.800 [2024-11-28 00:08:16.371855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:01.800 [2024-11-28 00:08:16.371862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:01.800 [2024-11-28 00:08:16.371873] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:01.800 [2024-11-28 00:08:16.371879] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:01.800 [2024-11-28 00:08:16.371886] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:01.800 [2024-11-28 00:08:16.371892] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:01.800 [2024-11-28 00:08:16.371899] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:01.800 [2024-11-28 00:08:16.371904] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:01.800 [2024-11-28 00:08:16.371911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.371916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:01.800 [2024-11-28 00:08:16.371923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:14:01.800 [2024-11-28 00:08:16.371930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.371987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.800 [2024-11-28 00:08:16.372002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:01.800 [2024-11-28 00:08:16.372009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:14:01.800 [2024-11-28 00:08:16.372027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.800 [2024-11-28 00:08:16.372097] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:01.800 [2024-11-28 00:08:16.372104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:01.800 [2024-11-28 00:08:16.372112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:01.801 [2024-11-28 00:08:16.372131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:01.801 [2024-11-28 00:08:16.372149] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:01.801 [2024-11-28 00:08:16.372160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:01.801 [2024-11-28 00:08:16.372166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:01.801 [2024-11-28 00:08:16.372173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:01.801 [2024-11-28 00:08:16.372178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:01.801 [2024-11-28 00:08:16.372184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:01.801 [2024-11-28 00:08:16.372189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:01.801 [2024-11-28 00:08:16.372201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:01.801 [2024-11-28 00:08:16.372207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:01.801 [2024-11-28 00:08:16.372219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:01.801 [2024-11-28 00:08:16.372234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:01.801 [2024-11-28 00:08:16.372245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:01.801 [2024-11-28 00:08:16.372264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:01.801 [2024-11-28 00:08:16.372281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372290] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:01.801 [2024-11-28 00:08:16.372303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:01.801 [2024-11-28 00:08:16.372321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:01.801 [2024-11-28 00:08:16.372334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:01.801 [2024-11-28 00:08:16.372341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:01.801 [2024-11-28 00:08:16.372347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:01.801 [2024-11-28 00:08:16.372354] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:01.801 [2024-11-28 00:08:16.372360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:01.801 [2024-11-28 00:08:16.372377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:01.801 [2024-11-28 00:08:16.372392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:01.801 [2024-11-28 00:08:16.372398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:01.801 [2024-11-28 00:08:16.372405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:01.801 [2024-11-28 00:08:16.372412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:01.801 [2024-11-28 00:08:16.372419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:01.801 [2024-11-28 00:08:16.372425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:01.801 [2024-11-28 00:08:16.372438] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:01.801 [2024-11-28 00:08:16.372446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:01.801 [2024-11-28 00:08:16.372458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:01.801 [2024-11-28 00:08:16.372464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:01.801 [2024-11-28 00:08:16.372472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:01.801 [2024-11-28 00:08:16.372478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:01.801 [2024-11-28 00:08:16.372485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:01.801 [2024-11-28 00:08:16.372491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:01.801 
[2024-11-28 00:08:16.372498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:01.801 [2024-11-28 00:08:16.372505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:01.801 [2024-11-28 00:08:16.372513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:01.801 [2024-11-28 00:08:16.372519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:01.801 [2024-11-28 00:08:16.372528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:01.801 [2024-11-28 00:08:16.372535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:01.801 [2024-11-28 00:08:16.372543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:01.801 [2024-11-28 00:08:16.372549] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:01.801 [2024-11-28 00:08:16.372557] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:01.801 [2024-11-28 00:08:16.372564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:01.801 [2024-11-28 00:08:16.372571] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:01.801 [2024-11-28 00:08:16.372578] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:01.801 [2024-11-28 00:08:16.372585] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:01.801 [2024-11-28 00:08:16.372592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.372599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:01.801 [2024-11-28 00:08:16.372607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:14:01.801 [2024-11-28 00:08:16.372614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.377883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.377992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:01.801 [2024-11-28 00:08:16.378004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.221 ms 00:14:01.801 [2024-11-28 00:08:16.378011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.378083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.378092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:01.801 [2024-11-28 00:08:16.378098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:01.801 [2024-11-28 00:08:16.378107] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.385863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.385890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:01.801 [2024-11-28 00:08:16.385897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.722 ms 00:14:01.801 [2024-11-28 00:08:16.385904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.385926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.385933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:01.801 [2024-11-28 00:08:16.385942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:14:01.801 [2024-11-28 00:08:16.385948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.386239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.386265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:01.801 [2024-11-28 00:08:16.386280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:14:01.801 [2024-11-28 00:08:16.386288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:01.801 [2024-11-28 00:08:16.386397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:01.801 [2024-11-28 00:08:16.386409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:01.801 [2024-11-28 00:08:16.386422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:14:01.801 [2024-11-28 00:08:16.386439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:02.060 [2024-11-28 00:08:16.400738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:02.060 [2024-11-28 00:08:16.400932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:02.060 [2024-11-28 00:08:16.400958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.275 ms 00:14:02.060 [2024-11-28 00:08:16.400972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:02.060 [2024-11-28 00:08:16.411009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:02.060 [2024-11-28 00:08:16.422786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:02.061 [2024-11-28 00:08:16.422823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:02.061 [2024-11-28 00:08:16.422832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.682 ms 00:14:02.061 [2024-11-28 00:08:16.422838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:02.061 [2024-11-28 00:08:16.468703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:02.061 [2024-11-28 00:08:16.468746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:02.061 [2024-11-28 00:08:16.468770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.827 ms 00:14:02.061 [2024-11-28 00:08:16.468779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:02.061 [2024-11-28 00:08:16.468834] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:14:02.061 [2024-11-28 00:08:16.468846] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:04.594 [2024-11-28 00:08:18.706190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.706249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:04.594 [2024-11-28 00:08:18.706265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2237.338 ms 00:14:04.594 [2024-11-28 00:08:18.706274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.594 [2024-11-28 00:08:18.706478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.706490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:04.594 [2024-11-28 00:08:18.706511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:14:04.594 [2024-11-28 00:08:18.706521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.594 [2024-11-28 00:08:18.709408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.709442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:04.594 [2024-11-28 00:08:18.709456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:14:04.594 [2024-11-28 00:08:18.709465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.594 [2024-11-28 00:08:18.711718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.711749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:04.594 [2024-11-28 00:08:18.711761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:14:04.594 [2024-11-28 00:08:18.711769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.594 [2024-11-28 00:08:18.711946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.711956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:04.594 [2024-11-28 00:08:18.711967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:14:04.594 [2024-11-28 00:08:18.711977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.594 [2024-11-28 00:08:18.730240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.594 [2024-11-28 00:08:18.730273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:04.594 [2024-11-28 00:08:18.730285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.237 ms 00:14:04.595 [2024-11-28 00:08:18.730301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.733742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.595 [2024-11-28 00:08:18.733774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:04.595 [2024-11-28 00:08:18.733787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.394 ms 00:14:04.595 [2024-11-28 00:08:18.733797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.737388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.595 [2024-11-28 00:08:18.737419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:04.595 
[2024-11-28 00:08:18.737431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.555 ms 00:14:04.595 [2024-11-28 00:08:18.737438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.740279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.595 [2024-11-28 00:08:18.740309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:04.595 [2024-11-28 00:08:18.740321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:14:04.595 [2024-11-28 00:08:18.740328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.740394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.595 [2024-11-28 00:08:18.740404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:04.595 [2024-11-28 00:08:18.740415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:14:04.595 [2024-11-28 00:08:18.740423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.740494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:04.595 [2024-11-28 00:08:18.740504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:04.595 [2024-11-28 00:08:18.740517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:04.595 [2024-11-28 00:08:18.740525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:04.595 [2024-11-28 00:08:18.741395] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2376.994 ms, result 0 00:14:04.595 { 00:14:04.595 "name": "ftl0", 00:14:04.595 "uuid": "555580ee-7209-498f-8841-89f5c42ae0c5" 00:14:04.595 } 00:14:04.595 00:08:18 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:04.595 00:08:18 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:04.595 00:08:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:04.595 00:08:18 -- common/autotest_common.sh@899 -- # local i 00:14:04.595 00:08:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:04.595 00:08:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:04.595 00:08:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:04.595 00:08:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:04.595 [ 00:14:04.595 { 00:14:04.595 "name": "ftl0", 00:14:04.595 "aliases": [ 00:14:04.595 "555580ee-7209-498f-8841-89f5c42ae0c5" 00:14:04.595 ], 00:14:04.595 "product_name": "FTL disk", 00:14:04.595 "block_size": 4096, 00:14:04.595 "num_blocks": 20971520, 00:14:04.595 "uuid": "555580ee-7209-498f-8841-89f5c42ae0c5", 00:14:04.595 "assigned_rate_limits": { 00:14:04.595 "rw_ios_per_sec": 0, 00:14:04.595 "rw_mbytes_per_sec": 0, 00:14:04.595 "r_mbytes_per_sec": 0, 00:14:04.595 "w_mbytes_per_sec": 0 00:14:04.595 }, 00:14:04.595 "claimed": false, 00:14:04.595 "zoned": false, 00:14:04.595 "supported_io_types": { 00:14:04.595 "read": true, 00:14:04.595 "write": true, 00:14:04.595 "unmap": true, 00:14:04.595 "write_zeroes": true, 00:14:04.595 "flush": true, 00:14:04.595 "reset": false, 00:14:04.595 "compare": false, 00:14:04.595 "compare_and_write": false, 00:14:04.595 "abort": false, 00:14:04.595 "nvme_admin": false, 00:14:04.595 "nvme_io": false 00:14:04.595 }, 00:14:04.595 
"driver_specific": { 00:14:04.595 "ftl": { 00:14:04.595 "base_bdev": "1cefe38e-ab7a-44d4-9c7d-b0113084a0c0", 00:14:04.595 "cache": "nvc0n1p0" 00:14:04.595 } 00:14:04.595 } 00:14:04.595 } 00:14:04.595 ] 00:14:04.595 00:08:19 -- common/autotest_common.sh@905 -- # return 0 00:14:04.595 00:08:19 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:04.595 00:08:19 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:04.854 00:08:19 -- ftl/fio.sh@70 -- # echo ']}' 00:14:04.854 00:08:19 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:05.115 [2024-11-28 00:08:19.499223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.499269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:05.115 [2024-11-28 00:08:19.499282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:05.115 [2024-11-28 00:08:19.499291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.499323] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:05.115 [2024-11-28 00:08:19.499783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.499804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:05.115 [2024-11-28 00:08:19.499814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:14:05.115 [2024-11-28 00:08:19.499821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.500239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.500258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:05.115 [2024-11-28 00:08:19.500269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:14:05.115 [2024-11-28 00:08:19.500277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.503541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.503563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:05.115 [2024-11-28 00:08:19.503584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:14:05.115 [2024-11-28 00:08:19.503591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.509827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.509853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:05.115 [2024-11-28 00:08:19.509867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:14:05.115 [2024-11-28 00:08:19.509875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.511266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.511297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:05.115 [2024-11-28 00:08:19.511321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:14:05.115 [2024-11-28 00:08:19.511328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.514990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.515023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:05.115 [2024-11-28 00:08:19.515034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:14:05.115 [2024-11-28 00:08:19.515042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.515197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.515208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:05.115 [2024-11-28 00:08:19.515217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:14:05.115 [2024-11-28 00:08:19.515224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.516670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.516786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:05.115 [2024-11-28 00:08:19.516804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:14:05.115 [2024-11-28 00:08:19.516811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.517888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.517917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:05.115 [2024-11-28 00:08:19.517928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:14:05.115 [2024-11-28 00:08:19.517934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.518789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.518827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:05.115 [2024-11-28 00:08:19.518838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:14:05.115 [2024-11-28 00:08:19.518844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.519647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.115 [2024-11-28 00:08:19.519676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:05.115 [2024-11-28 00:08:19.519686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:14:05.115 [2024-11-28 00:08:19.519693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.115 [2024-11-28 00:08:19.519730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:05.115 [2024-11-28 00:08:19.519744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.519997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520005] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:05.115 [2024-11-28 00:08:19.520090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 
00:08:19.520212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:14:05.116 [2024-11-28 00:08:19.520839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.520976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:05.116 [2024-11-28 00:08:19.521482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:05.116 [2024-11-28 00:08:19.521491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 555580ee-7209-498f-8841-89f5c42ae0c5 00:14:05.116 [2024-11-28 00:08:19.521499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:05.116 [2024-11-28 00:08:19.521523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:05.116 [2024-11-28 00:08:19.521530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:05.116 [2024-11-28 00:08:19.521541] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:14:05.116 [2024-11-28 00:08:19.521548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:05.116 [2024-11-28 00:08:19.521558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:05.116 [2024-11-28 00:08:19.521565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:05.116 [2024-11-28 00:08:19.521573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:05.116 [2024-11-28 00:08:19.521579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:05.116 [2024-11-28 00:08:19.521587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.116 [2024-11-28 00:08:19.521595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:05.116 [2024-11-28 00:08:19.521604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:14:05.116 [2024-11-28 00:08:19.521611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.116 [2024-11-28 00:08:19.523029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.116 [2024-11-28 00:08:19.523045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:05.116 [2024-11-28 00:08:19.523066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:14:05.116 [2024-11-28 00:08:19.523073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.116 [2024-11-28 00:08:19.523148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:05.116 [2024-11-28 00:08:19.523156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:05.116 [2024-11-28 00:08:19.523165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:14:05.116 [2024-11-28 00:08:19.523180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.116 [2024-11-28 00:08:19.528205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.116 [2024-11-28 00:08:19.528318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:05.116 [2024-11-28 00:08:19.528334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.116 [2024-11-28 00:08:19.528342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.116 [2024-11-28 00:08:19.528409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.116 [2024-11-28 00:08:19.528418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:05.116 [2024-11-28 00:08:19.528427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.116 [2024-11-28 00:08:19.528434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.116 [2024-11-28 00:08:19.528494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.116 [2024-11-28 00:08:19.528504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:05.116 [2024-11-28 00:08:19.528524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.528530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.528567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.528575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:05.117 [2024-11-28 
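For context on the statistics dump above: WAF (write amplification factor) is the ratio of media writes to user writes, here effectively total writes / user writes = 960 / 0, which is undefined and is therefore printed as 'inf'; likewise all 100 bands report 0 / 261120 valid blocks in state 'free' because the device is unloaded before any user data has been written.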
00:08:19.528584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.528591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.537605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.537642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:05.117 [2024-11-28 00:08:19.537653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.537660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:05.117 [2024-11-28 00:08:19.541189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:05.117 [2024-11-28 00:08:19.541272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:05.117 [2024-11-28 00:08:19.541342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:05.117 [2024-11-28 00:08:19.541539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:05.117 [2024-11-28 00:08:19.541610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:05.117 [2024-11-28 00:08:19.541685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:05.117 [2024-11-28 00:08:19.541751] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:05.117 [2024-11-28 00:08:19.541769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:05.117 [2024-11-28 00:08:19.541777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:05.117 [2024-11-28 00:08:19.541943] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.686 ms, result 0 00:14:05.117 true 00:14:05.117 00:08:19 -- ftl/fio.sh@75 -- # killprocess 82005 00:14:05.117 00:08:19 -- common/autotest_common.sh@936 -- # '[' -z 82005 ']' 00:14:05.117 00:08:19 -- common/autotest_common.sh@940 -- # kill -0 82005 00:14:05.117 00:08:19 -- common/autotest_common.sh@941 -- # uname 00:14:05.117 00:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.117 00:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82005 00:14:05.117 killing process with pid 82005 00:14:05.117 00:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.117 00:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.117 00:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82005' 00:14:05.117 00:08:19 -- common/autotest_common.sh@955 -- # kill 82005 00:14:05.117 00:08:19 -- common/autotest_common.sh@960 -- # wait 82005 00:14:10.472 00:08:24 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:10.472 00:08:24 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:10.472 00:08:24 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:10.472 00:08:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.472 00:08:24 -- common/autotest_common.sh@10 -- # set +x 00:14:10.472 00:08:24 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:10.472 00:08:24 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:10.472 00:08:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:10.472 00:08:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:10.472 00:08:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:10.472 00:08:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.472 00:08:24 -- common/autotest_common.sh@1330 -- # shift 00:14:10.472 00:08:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:10.472 00:08:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:10.472 00:08:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:10.472 00:08:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:10.472 00:08:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.472 00:08:24 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:10.472 00:08:24 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:10.472 00:08:24 -- common/autotest_common.sh@1336 -- # break 00:14:10.472 00:08:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:10.472 00:08:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:10.472 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 
68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:10.472 fio-3.35 00:14:10.472 Starting 1 thread 00:14:14.658 00:14:14.658 test: (groupid=0, jobs=1): err= 0: pid=82196: Thu Nov 28 00:08:28 2024 00:14:14.658 read: IOPS=1141, BW=75.8MiB/s (79.5MB/s)(255MiB/3359msec) 00:14:14.658 slat (nsec): min=2905, max=18172, avg=4355.88, stdev=1762.14 00:14:14.658 clat (usec): min=258, max=980, avg=395.45, stdev=98.87 00:14:14.658 lat (usec): min=262, max=984, avg=399.81, stdev=99.28 00:14:14.658 clat percentiles (usec): 00:14:14.658 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:14:14.658 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 375], 60.00th=[ 412], 00:14:14.658 | 70.00th=[ 465], 80.00th=[ 498], 90.00th=[ 506], 95.00th=[ 529], 00:14:14.658 | 99.00th=[ 742], 99.50th=[ 807], 99.90th=[ 955], 99.95th=[ 963], 00:14:14.658 | 99.99th=[ 979] 00:14:14.658 write: IOPS=1148, BW=76.3MiB/s (80.0MB/s)(256MiB/3357msec); 0 zone resets 00:14:14.658 slat (usec): min=13, max=105, avg=19.62, stdev= 5.10 00:14:14.658 clat (usec): min=275, max=1817, avg=441.18, stdev=129.62 00:14:14.658 lat (usec): min=300, max=1847, avg=460.80, stdev=130.67 00:14:14.658 clat percentiles (usec): 00:14:14.658 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 334], 00:14:14.658 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 437], 60.00th=[ 465], 00:14:14.658 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 594], 00:14:14.658 | 99.00th=[ 906], 99.50th=[ 955], 99.90th=[ 1663], 99.95th=[ 1795], 00:14:14.658 | 99.99th=[ 1811] 00:14:14.658 bw ( KiB/s): min=69496, max=88672, per=96.87%, avg=75661.33, stdev=7086.46, samples=6 00:14:14.658 iops : min= 1022, max= 1304, avg=1112.67, stdev=104.21, samples=6 00:14:14.658 lat (usec) : 500=74.82%, 750=23.32%, 1000=1.66% 00:14:14.658 lat (msec) : 2=0.20% 00:14:14.658 cpu : usr=99.49%, sys=0.03%, ctx=9, majf=0, minf=1328 00:14:14.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.658 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.658 00:14:14.658 Run status group 0 (all jobs): 00:14:14.658 READ: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=255MiB (267MB), run=3359-3359msec 00:14:14.658 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=256MiB (269MB), run=3357-3357msec 00:14:14.916 ----------------------------------------------------- 00:14:14.916 Suppressions used: 00:14:14.916 count bytes template 00:14:14.917 1 5 /usr/src/fio/parse.c 00:14:14.917 1 8 libtcmalloc_minimal.so 00:14:14.917 1 904 libcrypto.so 00:14:14.917 ----------------------------------------------------- 00:14:14.917 00:14:14.917 00:08:29 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:14.917 00:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.917 00:08:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.917 00:08:29 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:14.917 00:08:29 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:14.917 00:08:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.917 00:08:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.917 00:08:29 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:14.917 00:08:29 -- 
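Each fio run in this test follows the invocation pattern traced above: the SPDK bdev fio plugin is injected via LD_PRELOAD (alongside libasan.so.8, since this is an ASAN build) and the job file targets the ftl0 bdev through the spdk_bdev ioengine. A sketch of the shape of such a run, with paths taken from this log; the job-file options (spdk_json_conf, thread=1, filename=ftl0, bs=68k) are the customary bdev-plugin settings inferred from the job header above, not the actual test/ftl/config/fio files, and the verify options are omitted:

cat > /tmp/randw-verify.fio <<'EOF'
; illustrative job file, not the one shipped in test/ftl/config/fio/
[global]
ioengine=spdk_bdev
; bdev configuration saved earlier via save_subsystem_config (assumed path)
spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
thread=1
rw=randwrite
bs=68k
[test]
filename=ftl0
iodepth=1
EOF
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio /tmp/randw-verify.fio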
common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:14.917 00:08:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:14.917 00:08:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:14.917 00:08:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:14.917 00:08:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.917 00:08:29 -- common/autotest_common.sh@1330 -- # shift 00:14:14.917 00:08:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:14.917 00:08:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:14.917 00:08:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.917 00:08:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:14.917 00:08:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:14.917 00:08:29 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:14.917 00:08:29 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:14.917 00:08:29 -- common/autotest_common.sh@1336 -- # break 00:14:14.917 00:08:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:14.917 00:08:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:15.175 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:15.175 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:15.175 fio-3.35 00:14:15.175 Starting 2 threads 00:14:37.104 00:14:37.104 first_half: (groupid=0, jobs=1): err= 0: pid=82282: Thu Nov 28 00:08:50 2024 00:14:37.104 read: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(255MiB/20494msec) 00:14:37.104 slat (nsec): min=2980, max=24125, avg=3717.42, stdev=683.27 00:14:37.104 clat (usec): min=567, max=377857, avg=32187.81, stdev=17088.77 00:14:37.104 lat (usec): min=571, max=377862, avg=32191.53, stdev=17088.79 00:14:37.104 clat percentiles (msec): 00:14:37.104 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:14:37.104 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:14:37.104 | 70.00th=[ 30], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 47], 00:14:37.104 | 99.00th=[ 121], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 284], 00:14:37.104 | 99.99th=[ 359] 00:14:37.104 write: IOPS=3881, BW=15.2MiB/s (15.9MB/s)(256MiB/16884msec); 0 zone resets 00:14:37.104 slat (usec): min=3, max=2370, avg= 5.31, stdev=10.86 00:14:37.104 clat (usec): min=344, max=71415, avg=7964.59, stdev=12716.74 00:14:37.104 lat (usec): min=352, max=71419, avg=7969.91, stdev=12716.83 00:14:37.104 clat percentiles (usec): 00:14:37.104 | 1.00th=[ 619], 5.00th=[ 734], 10.00th=[ 824], 20.00th=[ 1020], 00:14:37.104 | 30.00th=[ 1975], 40.00th=[ 3163], 50.00th=[ 4146], 60.00th=[ 4817], 00:14:37.104 | 70.00th=[ 5473], 80.00th=[10290], 90.00th=[15139], 95.00th=[49546], 00:14:37.104 | 99.00th=[59507], 99.50th=[63701], 99.90th=[68682], 99.95th=[69731], 00:14:37.104 | 99.99th=[70779] 00:14:37.105 bw ( KiB/s): min= 928, max=51648, per=80.93%, avg=22791.65, stdev=15420.19, samples=23 00:14:37.105 iops : min= 232, max=12912, avg=5697.91, 
stdev=3855.05, samples=23 00:14:37.105 lat (usec) : 500=0.06%, 750=2.95%, 1000=6.73% 00:14:37.105 lat (msec) : 2=5.59%, 4=9.41%, 10=15.77%, 20=6.60%, 50=48.21% 00:14:37.105 lat (msec) : 100=3.70%, 250=0.95%, 500=0.03% 00:14:37.105 cpu : usr=99.48%, sys=0.15%, ctx=37, majf=0, minf=5579 00:14:37.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:37.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.105 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:37.105 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:37.105 second_half: (groupid=0, jobs=1): err= 0: pid=82283: Thu Nov 28 00:08:50 2024 00:14:37.105 read: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(255MiB/20610msec) 00:14:37.105 slat (nsec): min=2954, max=20198, avg=3766.17, stdev=678.48 00:14:37.105 clat (usec): min=588, max=394219, avg=31501.85, stdev=18068.85 00:14:37.105 lat (usec): min=592, max=394223, avg=31505.62, stdev=18068.88 00:14:37.105 clat percentiles (msec): 00:14:37.105 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 28], 00:14:37.105 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:14:37.105 | 70.00th=[ 30], 80.00th=[ 32], 90.00th=[ 34], 95.00th=[ 42], 00:14:37.105 | 99.00th=[ 127], 99.50th=[ 144], 99.90th=[ 201], 99.95th=[ 305], 00:14:37.105 | 99.99th=[ 384] 00:14:37.105 write: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(256MiB/18618msec); 0 zone resets 00:14:37.105 slat (usec): min=3, max=871, avg= 5.26, stdev= 5.58 00:14:37.105 clat (usec): min=340, max=71668, avg=8834.64, stdev=13536.40 00:14:37.105 lat (usec): min=350, max=71673, avg=8839.90, stdev=13536.48 00:14:37.105 clat percentiles (usec): 00:14:37.105 | 1.00th=[ 603], 5.00th=[ 742], 10.00th=[ 873], 20.00th=[ 1205], 00:14:37.105 | 30.00th=[ 2409], 40.00th=[ 3294], 50.00th=[ 4146], 60.00th=[ 4948], 00:14:37.105 | 70.00th=[ 5800], 80.00th=[11338], 90.00th=[22152], 95.00th=[50070], 00:14:37.105 | 99.00th=[60031], 99.50th=[64226], 99.90th=[70779], 99.95th=[70779], 00:14:37.105 | 99.99th=[71828] 00:14:37.105 bw ( KiB/s): min= 1040, max=69304, per=74.48%, avg=20974.68, stdev=15782.39, samples=25 00:14:37.105 iops : min= 260, max=17326, avg=5243.64, stdev=3945.56, samples=25 00:14:37.105 lat (usec) : 500=0.05%, 750=2.56%, 1000=4.64% 00:14:37.105 lat (msec) : 2=6.75%, 4=10.47%, 10=15.58%, 20=6.76%, 50=48.56% 00:14:37.105 lat (msec) : 100=3.79%, 250=0.81%, 500=0.04% 00:14:37.105 cpu : usr=99.45%, sys=0.14%, ctx=44, majf=0, minf=5551 00:14:37.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:37.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.105 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:37.105 issued rwts: total=65327,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:37.105 00:14:37.105 Run status group 0 (all jobs): 00:14:37.105 READ: bw=24.7MiB/s (25.9MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=510MiB (535MB), run=20494-20610msec 00:14:37.105 WRITE: bw=27.5MiB/s (28.8MB/s), 13.8MiB/s-15.2MiB/s (14.4MB/s-15.9MB/s), io=512MiB (537MB), run=16884-18618msec 00:14:38.041 ----------------------------------------------------- 00:14:38.041 Suppressions used: 00:14:38.041 count bytes template 00:14:38.041 2 10 /usr/src/fio/parse.c 00:14:38.041 2 192 /usr/src/fio/iolog.c 00:14:38.041 1 8 
libtcmalloc_minimal.so 00:14:38.041 1 904 libcrypto.so 00:14:38.041 ----------------------------------------------------- 00:14:38.041 00:14:38.041 00:08:52 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:14:38.041 00:08:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.041 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.041 00:08:52 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:38.041 00:08:52 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:14:38.041 00:08:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.041 00:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:38.300 00:08:52 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:38.300 00:08:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:38.300 00:08:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:38.300 00:08:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.300 00:08:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:38.300 00:08:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.300 00:08:52 -- common/autotest_common.sh@1330 -- # shift 00:14:38.300 00:08:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:38.300 00:08:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.300 00:08:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.300 00:08:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:38.300 00:08:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:38.300 00:08:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:38.300 00:08:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:38.300 00:08:52 -- common/autotest_common.sh@1336 -- # break 00:14:38.300 00:08:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:38.300 00:08:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:38.300 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:38.300 fio-3.35 00:14:38.300 Starting 1 thread 00:14:53.174 00:14:53.174 test: (groupid=0, jobs=1): err= 0: pid=82557: Thu Nov 28 00:09:05 2024 00:14:53.174 read: IOPS=8569, BW=33.5MiB/s (35.1MB/s)(255MiB/7609msec) 00:14:53.174 slat (nsec): min=2967, max=18092, avg=3445.12, stdev=477.83 00:14:53.174 clat (usec): min=482, max=28855, avg=14930.68, stdev=1810.02 00:14:53.174 lat (usec): min=486, max=28859, avg=14934.12, stdev=1810.04 00:14:53.174 clat percentiles (usec): 00:14:53.174 | 1.00th=[12780], 5.00th=[13960], 10.00th=[14091], 20.00th=[14222], 00:14:53.174 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14484], 60.00th=[14615], 00:14:53.174 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15795], 95.00th=[19268], 00:14:53.174 | 99.00th=[23462], 99.50th=[24249], 99.90th=[25035], 99.95th=[25822], 00:14:53.174 | 99.99th=[28181] 00:14:53.174 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(256MiB/4202msec); 0 zone resets 00:14:53.174 slat (usec): min=4, max=431, avg= 5.94, stdev= 3.51 00:14:53.174 clat 
(usec): min=484, max=58810, avg=8165.07, stdev=9645.60 00:14:53.174 lat (usec): min=489, max=58815, avg=8171.01, stdev=9645.59 00:14:53.174 clat percentiles (usec): 00:14:53.174 | 1.00th=[ 603], 5.00th=[ 685], 10.00th=[ 791], 20.00th=[ 922], 00:14:53.174 | 30.00th=[ 1045], 40.00th=[ 1795], 50.00th=[ 5407], 60.00th=[ 6390], 00:14:53.174 | 70.00th=[ 7635], 80.00th=[12387], 90.00th=[26870], 95.00th=[28443], 00:14:53.174 | 99.00th=[35390], 99.50th=[38011], 99.90th=[54264], 99.95th=[55837], 00:14:53.174 | 99.99th=[57410] 00:14:53.174 bw ( KiB/s): min=23912, max=75816, per=93.38%, avg=58254.22, stdev=15865.40, samples=9 00:14:53.174 iops : min= 5978, max=18954, avg=14563.56, stdev=3966.35, samples=9 00:14:53.174 lat (usec) : 500=0.01%, 750=4.00%, 1000=9.34% 00:14:53.174 lat (msec) : 2=6.92%, 4=0.81%, 10=17.51%, 20=51.52%, 50=9.79% 00:14:53.174 lat (msec) : 100=0.11% 00:14:53.174 cpu : usr=99.33%, sys=0.23%, ctx=27, majf=0, minf=5577 00:14:53.174 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:53.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.174 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.174 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.174 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.174 00:14:53.174 Run status group 0 (all jobs): 00:14:53.174 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=255MiB (267MB), run=7609-7609msec 00:14:53.174 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=256MiB (268MB), run=4202-4202msec 00:14:53.174 ----------------------------------------------------- 00:14:53.174 Suppressions used: 00:14:53.174 count bytes template 00:14:53.174 1 5 /usr/src/fio/parse.c 00:14:53.174 2 192 /usr/src/fio/iolog.c 00:14:53.174 1 8 libtcmalloc_minimal.so 00:14:53.174 1 904 libcrypto.so 00:14:53.174 ----------------------------------------------------- 00:14:53.174 00:14:53.174 00:09:05 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:14:53.174 00:09:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.174 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.174 00:09:05 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:53.174 Remove shared memory files 00:14:53.174 00:09:05 -- ftl/fio.sh@85 -- # remove_shm 00:14:53.174 00:09:05 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:14:53.174 00:09:05 -- ftl/common.sh@205 -- # rm -f rm -f 00:14:53.174 00:09:05 -- ftl/common.sh@206 -- # rm -f rm -f 00:14:53.174 00:09:05 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68584 /dev/shm/spdk_tgt_trace.pid80955 00:14:53.174 00:09:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:14:53.174 00:09:05 -- ftl/common.sh@209 -- # rm -f rm -f 00:14:53.174 ************************************ 00:14:53.174 END TEST ftl_fio_basic 00:14:53.175 ************************************ 00:14:53.175 00:14:53.175 real 0m52.985s 00:14:53.175 user 1m58.456s 00:14:53.175 sys 0m2.339s 00:14:53.175 00:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.175 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 00:09:05 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:14:53.175 00:09:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:53.175 00:09:05 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:53.175 00:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 ************************************ 00:14:53.175 START TEST ftl_bdevperf 00:14:53.175 ************************************ 00:14:53.175 00:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:14:53.175 * Looking for test storage... 00:14:53.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.175 00:09:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.175 00:09:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.175 00:09:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.175 00:09:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.175 00:09:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.175 00:09:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.175 00:09:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.175 00:09:06 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.175 00:09:06 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.175 00:09:06 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.175 00:09:06 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.175 00:09:06 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.175 00:09:06 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.175 00:09:06 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.175 00:09:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.175 00:09:06 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.175 00:09:06 -- scripts/common.sh@344 -- # : 1 00:14:53.175 00:09:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.175 00:09:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.175 00:09:06 -- scripts/common.sh@364 -- # decimal 1 00:14:53.175 00:09:06 -- scripts/common.sh@352 -- # local d=1 00:14:53.175 00:09:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.175 00:09:06 -- scripts/common.sh@354 -- # echo 1 00:14:53.175 00:09:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.175 00:09:06 -- scripts/common.sh@365 -- # decimal 2 00:14:53.175 00:09:06 -- scripts/common.sh@352 -- # local d=2 00:14:53.175 00:09:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.175 00:09:06 -- scripts/common.sh@354 -- # echo 2 00:14:53.175 00:09:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.175 00:09:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.175 00:09:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.175 00:09:06 -- scripts/common.sh@367 -- # return 0 00:14:53.175 00:09:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.175 00:09:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.175 --rc genhtml_branch_coverage=1 00:14:53.175 --rc genhtml_function_coverage=1 00:14:53.175 --rc genhtml_legend=1 00:14:53.175 --rc geninfo_all_blocks=1 00:14:53.175 --rc geninfo_unexecuted_blocks=1 00:14:53.175 00:14:53.175 ' 00:14:53.175 00:09:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.175 --rc genhtml_branch_coverage=1 00:14:53.175 --rc genhtml_function_coverage=1 00:14:53.175 --rc genhtml_legend=1 00:14:53.175 --rc geninfo_all_blocks=1 00:14:53.175 --rc geninfo_unexecuted_blocks=1 00:14:53.175 00:14:53.175 ' 00:14:53.175 
00:09:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.175 --rc genhtml_branch_coverage=1 00:14:53.175 --rc genhtml_function_coverage=1 00:14:53.175 --rc genhtml_legend=1 00:14:53.175 --rc geninfo_all_blocks=1 00:14:53.175 --rc geninfo_unexecuted_blocks=1 00:14:53.175 00:14:53.175 ' 00:14:53.175 00:09:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.175 --rc genhtml_branch_coverage=1 00:14:53.175 --rc genhtml_function_coverage=1 00:14:53.175 --rc genhtml_legend=1 00:14:53.175 --rc geninfo_all_blocks=1 00:14:53.175 --rc geninfo_unexecuted_blocks=1 00:14:53.175 00:14:53.175 ' 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:53.175 00:09:06 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:14:53.175 00:09:06 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.175 00:09:06 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.175 00:09:06 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:53.175 00:09:06 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:53.175 00:09:06 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.175 00:09:06 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:53.175 00:09:06 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:53.175 00:09:06 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.175 00:09:06 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.175 00:09:06 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:53.175 00:09:06 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:53.175 00:09:06 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.175 00:09:06 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.175 00:09:06 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:53.175 00:09:06 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:53.175 00:09:06 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.175 00:09:06 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.175 00:09:06 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:53.175 00:09:06 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:53.175 00:09:06 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.175 00:09:06 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.175 00:09:06 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.175 00:09:06 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.175 00:09:06 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:53.175 00:09:06 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:53.175 00:09:06 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.175 00:09:06 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:14:53.175 00:09:06 -- 
ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@13 -- # use_append= 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@15 -- # timeout=240 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:14:53.175 00:09:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.175 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=82768 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@22 -- # waitforlisten 82768 00:14:53.175 00:09:06 -- common/autotest_common.sh@829 -- # '[' -z 82768 ']' 00:14:53.175 00:09:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.175 00:09:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:14:53.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.175 00:09:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.175 00:09:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.175 00:09:06 -- common/autotest_common.sh@10 -- # set +x 00:14:53.175 [2024-11-28 00:09:06.180183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.175 [2024-11-28 00:09:06.180413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82768 ] 00:14:53.175 [2024-11-28 00:09:06.328222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.175 [2024-11-28 00:09:06.358836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.175 00:09:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.175 00:09:06 -- common/autotest_common.sh@862 -- # return 0 00:14:53.175 00:09:06 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:53.175 00:09:06 -- ftl/common.sh@54 -- # local name=nvme0 00:14:53.175 00:09:06 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:53.175 00:09:06 -- ftl/common.sh@56 -- # local size=103424 00:14:53.175 00:09:06 -- ftl/common.sh@59 -- # local base_bdev 00:14:53.175 00:09:06 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:53.175 00:09:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:53.175 00:09:07 -- ftl/common.sh@62 -- # local base_size 00:14:53.175 00:09:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:53.175 00:09:07 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:53.175 00:09:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:53.175 00:09:07 -- common/autotest_common.sh@1369 -- # local bs 00:14:53.175 00:09:07 -- common/autotest_common.sh@1370 -- # local nb 00:14:53.175 00:09:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:53.175 00:09:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:53.175 { 00:14:53.175 
"name": "nvme0n1", 00:14:53.175 "aliases": [ 00:14:53.176 "a13fd2a2-3a55-4570-8253-e0c312ea0564" 00:14:53.176 ], 00:14:53.176 "product_name": "NVMe disk", 00:14:53.176 "block_size": 4096, 00:14:53.176 "num_blocks": 1310720, 00:14:53.176 "uuid": "a13fd2a2-3a55-4570-8253-e0c312ea0564", 00:14:53.176 "assigned_rate_limits": { 00:14:53.176 "rw_ios_per_sec": 0, 00:14:53.176 "rw_mbytes_per_sec": 0, 00:14:53.176 "r_mbytes_per_sec": 0, 00:14:53.176 "w_mbytes_per_sec": 0 00:14:53.176 }, 00:14:53.176 "claimed": true, 00:14:53.176 "claim_type": "read_many_write_one", 00:14:53.176 "zoned": false, 00:14:53.176 "supported_io_types": { 00:14:53.176 "read": true, 00:14:53.176 "write": true, 00:14:53.176 "unmap": true, 00:14:53.176 "write_zeroes": true, 00:14:53.176 "flush": true, 00:14:53.176 "reset": true, 00:14:53.176 "compare": true, 00:14:53.176 "compare_and_write": false, 00:14:53.176 "abort": true, 00:14:53.176 "nvme_admin": true, 00:14:53.176 "nvme_io": true 00:14:53.176 }, 00:14:53.176 "driver_specific": { 00:14:53.176 "nvme": [ 00:14:53.176 { 00:14:53.176 "pci_address": "0000:00:07.0", 00:14:53.176 "trid": { 00:14:53.176 "trtype": "PCIe", 00:14:53.176 "traddr": "0000:00:07.0" 00:14:53.176 }, 00:14:53.176 "ctrlr_data": { 00:14:53.176 "cntlid": 0, 00:14:53.176 "vendor_id": "0x1b36", 00:14:53.176 "model_number": "QEMU NVMe Ctrl", 00:14:53.176 "serial_number": "12341", 00:14:53.176 "firmware_revision": "8.0.0", 00:14:53.176 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:53.176 "oacs": { 00:14:53.176 "security": 0, 00:14:53.176 "format": 1, 00:14:53.176 "firmware": 0, 00:14:53.176 "ns_manage": 1 00:14:53.176 }, 00:14:53.176 "multi_ctrlr": false, 00:14:53.176 "ana_reporting": false 00:14:53.176 }, 00:14:53.176 "vs": { 00:14:53.176 "nvme_version": "1.4" 00:14:53.176 }, 00:14:53.176 "ns_data": { 00:14:53.176 "id": 1, 00:14:53.176 "can_share": false 00:14:53.176 } 00:14:53.176 } 00:14:53.176 ], 00:14:53.176 "mp_policy": "active_passive" 00:14:53.176 } 00:14:53.176 } 00:14:53.176 ]' 00:14:53.176 00:09:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:53.176 00:09:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:53.176 00:09:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:53.176 00:09:07 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:53.176 00:09:07 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:53.176 00:09:07 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:53.176 00:09:07 -- ftl/common.sh@63 -- # base_size=5120 00:14:53.176 00:09:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:53.176 00:09:07 -- ftl/common.sh@67 -- # clear_lvols 00:14:53.176 00:09:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:53.176 00:09:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:53.176 00:09:07 -- ftl/common.sh@28 -- # stores=f6869b7c-9c5c-490d-8f59-ca603470ed21 00:14:53.176 00:09:07 -- ftl/common.sh@29 -- # for lvs in $stores 00:14:53.176 00:09:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6869b7c-9c5c-490d-8f59-ca603470ed21 00:14:53.434 00:09:07 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:53.696 00:09:08 -- ftl/common.sh@68 -- # lvs=3e3dd00e-77e1-4e19-a3b4-86c8589981e5 00:14:53.696 00:09:08 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e3dd00e-77e1-4e19-a3b4-86c8589981e5 00:14:53.696 00:09:08 -- 
ftl/bdevperf.sh@23 -- # split_bdev=3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.696 00:09:08 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.696 00:09:08 -- ftl/common.sh@35 -- # local name=nvc0 00:14:53.696 00:09:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:53.696 00:09:08 -- ftl/common.sh@37 -- # local base_bdev=3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.696 00:09:08 -- ftl/common.sh@38 -- # local cache_size= 00:14:53.696 00:09:08 -- ftl/common.sh@41 -- # get_bdev_size 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.696 00:09:08 -- common/autotest_common.sh@1367 -- # local bdev_name=3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.696 00:09:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:53.696 00:09:08 -- common/autotest_common.sh@1369 -- # local bs 00:14:53.696 00:09:08 -- common/autotest_common.sh@1370 -- # local nb 00:14:53.696 00:09:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:53.961 00:09:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:53.961 { 00:14:53.961 "name": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:53.961 "aliases": [ 00:14:53.961 "lvs/nvme0n1p0" 00:14:53.961 ], 00:14:53.961 "product_name": "Logical Volume", 00:14:53.961 "block_size": 4096, 00:14:53.961 "num_blocks": 26476544, 00:14:53.961 "uuid": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:53.961 "assigned_rate_limits": { 00:14:53.961 "rw_ios_per_sec": 0, 00:14:53.961 "rw_mbytes_per_sec": 0, 00:14:53.961 "r_mbytes_per_sec": 0, 00:14:53.961 "w_mbytes_per_sec": 0 00:14:53.961 }, 00:14:53.961 "claimed": false, 00:14:53.961 "zoned": false, 00:14:53.961 "supported_io_types": { 00:14:53.961 "read": true, 00:14:53.961 "write": true, 00:14:53.961 "unmap": true, 00:14:53.961 "write_zeroes": true, 00:14:53.961 "flush": false, 00:14:53.961 "reset": true, 00:14:53.961 "compare": false, 00:14:53.961 "compare_and_write": false, 00:14:53.961 "abort": false, 00:14:53.961 "nvme_admin": false, 00:14:53.961 "nvme_io": false 00:14:53.961 }, 00:14:53.961 "driver_specific": { 00:14:53.961 "lvol": { 00:14:53.961 "lvol_store_uuid": "3e3dd00e-77e1-4e19-a3b4-86c8589981e5", 00:14:53.961 "base_bdev": "nvme0n1", 00:14:53.961 "thin_provision": true, 00:14:53.961 "snapshot": false, 00:14:53.961 "clone": false, 00:14:53.961 "esnap_clone": false 00:14:53.961 } 00:14:53.961 } 00:14:53.961 } 00:14:53.961 ]' 00:14:53.961 00:09:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:53.961 00:09:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:53.961 00:09:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:53.961 00:09:08 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:53.961 00:09:08 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:53.961 00:09:08 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:53.961 00:09:08 -- ftl/common.sh@41 -- # local base_size=5171 00:14:53.961 00:09:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:53.961 00:09:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:54.220 00:09:08 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:54.220 00:09:08 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:54.220 00:09:08 -- ftl/common.sh@48 -- # get_bdev_size 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.220 00:09:08 -- common/autotest_common.sh@1367 -- # local 
bdev_name=3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.220 00:09:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:54.220 00:09:08 -- common/autotest_common.sh@1369 -- # local bs 00:14:54.220 00:09:08 -- common/autotest_common.sh@1370 -- # local nb 00:14:54.220 00:09:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.478 00:09:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:54.478 { 00:14:54.478 "name": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:54.478 "aliases": [ 00:14:54.478 "lvs/nvme0n1p0" 00:14:54.478 ], 00:14:54.478 "product_name": "Logical Volume", 00:14:54.478 "block_size": 4096, 00:14:54.478 "num_blocks": 26476544, 00:14:54.478 "uuid": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:54.478 "assigned_rate_limits": { 00:14:54.478 "rw_ios_per_sec": 0, 00:14:54.478 "rw_mbytes_per_sec": 0, 00:14:54.478 "r_mbytes_per_sec": 0, 00:14:54.478 "w_mbytes_per_sec": 0 00:14:54.478 }, 00:14:54.478 "claimed": false, 00:14:54.478 "zoned": false, 00:14:54.478 "supported_io_types": { 00:14:54.478 "read": true, 00:14:54.478 "write": true, 00:14:54.478 "unmap": true, 00:14:54.478 "write_zeroes": true, 00:14:54.478 "flush": false, 00:14:54.478 "reset": true, 00:14:54.478 "compare": false, 00:14:54.478 "compare_and_write": false, 00:14:54.478 "abort": false, 00:14:54.478 "nvme_admin": false, 00:14:54.478 "nvme_io": false 00:14:54.478 }, 00:14:54.478 "driver_specific": { 00:14:54.478 "lvol": { 00:14:54.478 "lvol_store_uuid": "3e3dd00e-77e1-4e19-a3b4-86c8589981e5", 00:14:54.478 "base_bdev": "nvme0n1", 00:14:54.478 "thin_provision": true, 00:14:54.478 "snapshot": false, 00:14:54.478 "clone": false, 00:14:54.478 "esnap_clone": false 00:14:54.478 } 00:14:54.478 } 00:14:54.478 } 00:14:54.478 ]' 00:14:54.478 00:09:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:54.478 00:09:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:54.478 00:09:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:54.478 00:09:08 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:54.478 00:09:08 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:54.478 00:09:08 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:54.478 00:09:08 -- ftl/common.sh@48 -- # cache_size=5171 00:14:54.478 00:09:08 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:54.737 00:09:09 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:14:54.737 00:09:09 -- ftl/bdevperf.sh@26 -- # get_bdev_size 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.737 00:09:09 -- common/autotest_common.sh@1367 -- # local bdev_name=3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.737 00:09:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:54.737 00:09:09 -- common/autotest_common.sh@1369 -- # local bs 00:14:54.737 00:09:09 -- common/autotest_common.sh@1370 -- # local nb 00:14:54.737 00:09:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fd91c52-9f58-465a-9304-50a4fd4a3253 00:14:54.996 00:09:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:54.996 { 00:14:54.996 "name": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:54.996 "aliases": [ 00:14:54.996 "lvs/nvme0n1p0" 00:14:54.996 ], 00:14:54.996 "product_name": "Logical Volume", 00:14:54.996 "block_size": 4096, 00:14:54.996 "num_blocks": 26476544, 00:14:54.996 "uuid": "3fd91c52-9f58-465a-9304-50a4fd4a3253", 00:14:54.996 
"assigned_rate_limits": { 00:14:54.996 "rw_ios_per_sec": 0, 00:14:54.996 "rw_mbytes_per_sec": 0, 00:14:54.996 "r_mbytes_per_sec": 0, 00:14:54.996 "w_mbytes_per_sec": 0 00:14:54.996 }, 00:14:54.996 "claimed": false, 00:14:54.996 "zoned": false, 00:14:54.996 "supported_io_types": { 00:14:54.996 "read": true, 00:14:54.996 "write": true, 00:14:54.996 "unmap": true, 00:14:54.996 "write_zeroes": true, 00:14:54.996 "flush": false, 00:14:54.996 "reset": true, 00:14:54.996 "compare": false, 00:14:54.996 "compare_and_write": false, 00:14:54.996 "abort": false, 00:14:54.996 "nvme_admin": false, 00:14:54.996 "nvme_io": false 00:14:54.996 }, 00:14:54.996 "driver_specific": { 00:14:54.996 "lvol": { 00:14:54.996 "lvol_store_uuid": "3e3dd00e-77e1-4e19-a3b4-86c8589981e5", 00:14:54.996 "base_bdev": "nvme0n1", 00:14:54.996 "thin_provision": true, 00:14:54.996 "snapshot": false, 00:14:54.996 "clone": false, 00:14:54.997 "esnap_clone": false 00:14:54.997 } 00:14:54.997 } 00:14:54.997 } 00:14:54.997 ]' 00:14:54.997 00:09:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:54.997 00:09:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:54.997 00:09:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:54.997 00:09:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:54.997 00:09:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:54.997 00:09:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:54.997 00:09:09 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:14:54.997 00:09:09 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3fd91c52-9f58-465a-9304-50a4fd4a3253 -c nvc0n1p0 --l2p_dram_limit 20 00:14:55.257 [2024-11-28 00:09:09.631164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.631318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:55.257 [2024-11-28 00:09:09.631339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:55.257 [2024-11-28 00:09:09.631346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.631414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.631422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:55.257 [2024-11-28 00:09:09.631432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:14:55.257 [2024-11-28 00:09:09.631439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.631457] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:55.257 [2024-11-28 00:09:09.631655] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:55.257 [2024-11-28 00:09:09.631666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.631672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:55.257 [2024-11-28 00:09:09.631680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:14:55.257 [2024-11-28 00:09:09.631687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.631783] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5365d95f-2b75-40c7-b686-01d3645e9f21 00:14:55.257 [2024-11-28 
00:09:09.632700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.632731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:55.257 [2024-11-28 00:09:09.632740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:14:55.257 [2024-11-28 00:09:09.632748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.637308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.637336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:55.257 [2024-11-28 00:09:09.637344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.531 ms 00:14:55.257 [2024-11-28 00:09:09.637353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.637423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.637433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:55.257 [2024-11-28 00:09:09.637439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:14:55.257 [2024-11-28 00:09:09.637449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.637489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.637500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:55.257 [2024-11-28 00:09:09.637506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:55.257 [2024-11-28 00:09:09.637518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.637535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:55.257 [2024-11-28 00:09:09.638761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.638857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:55.257 [2024-11-28 00:09:09.638872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:14:55.257 [2024-11-28 00:09:09.638882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.638909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.638915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:55.257 [2024-11-28 00:09:09.638924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:55.257 [2024-11-28 00:09:09.638930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.638942] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:55.257 [2024-11-28 00:09:09.639037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:55.257 [2024-11-28 00:09:09.639051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:55.257 [2024-11-28 00:09:09.639059] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:55.257 [2024-11-28 00:09:09.639068] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639075] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639082] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:55.257 [2024-11-28 00:09:09.639087] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:55.257 [2024-11-28 00:09:09.639094] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:55.257 [2024-11-28 00:09:09.639102] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:55.257 [2024-11-28 00:09:09.639108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.639115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:55.257 [2024-11-28 00:09:09.639125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:14:55.257 [2024-11-28 00:09:09.639130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.639178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.257 [2024-11-28 00:09:09.639184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:55.257 [2024-11-28 00:09:09.639192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:14:55.257 [2024-11-28 00:09:09.639197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.257 [2024-11-28 00:09:09.639254] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:55.257 [2024-11-28 00:09:09.639261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:55.257 [2024-11-28 00:09:09.639270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:55.257 [2024-11-28 00:09:09.639288] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:55.257 [2024-11-28 00:09:09.639305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:55.257 [2024-11-28 00:09:09.639316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:55.257 [2024-11-28 00:09:09.639321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:55.257 [2024-11-28 00:09:09.639329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:55.257 [2024-11-28 00:09:09.639334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:55.257 [2024-11-28 00:09:09.639340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:55.257 [2024-11-28 00:09:09.639345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:55.257 [2024-11-28 00:09:09.639359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 
00:14:55.257 [2024-11-28 00:09:09.639375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:55.257 [2024-11-28 00:09:09.639386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:55.257 [2024-11-28 00:09:09.639390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:55.257 [2024-11-28 00:09:09.639403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:55.257 [2024-11-28 00:09:09.639420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:55.257 [2024-11-28 00:09:09.639436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:55.257 [2024-11-28 00:09:09.639455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.257 [2024-11-28 00:09:09.639468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:55.257 [2024-11-28 00:09:09.639473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:55.257 [2024-11-28 00:09:09.639479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:55.258 [2024-11-28 00:09:09.639484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:55.258 [2024-11-28 00:09:09.639490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:55.258 [2024-11-28 00:09:09.639495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:55.258 [2024-11-28 00:09:09.639501] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:55.258 [2024-11-28 00:09:09.639507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:55.258 [2024-11-28 00:09:09.639514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:55.258 [2024-11-28 00:09:09.639522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.258 [2024-11-28 00:09:09.639530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:55.258 [2024-11-28 00:09:09.639535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:55.258 [2024-11-28 00:09:09.639541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:55.258 [2024-11-28 00:09:09.639546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:55.258 [2024-11-28 00:09:09.639554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:55.258 [2024-11-28 00:09:09.639559] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:14:55.258 [2024-11-28 00:09:09.639566] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:55.258 [2024-11-28 00:09:09.639573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:55.258 [2024-11-28 00:09:09.639583] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:55.258 [2024-11-28 00:09:09.639589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:55.258 [2024-11-28 00:09:09.639596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:55.258 [2024-11-28 00:09:09.639601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:55.258 [2024-11-28 00:09:09.639608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:55.258 [2024-11-28 00:09:09.639614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:55.258 [2024-11-28 00:09:09.639620] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:55.258 [2024-11-28 00:09:09.639626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:55.258 [2024-11-28 00:09:09.639633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:55.258 [2024-11-28 00:09:09.639639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:55.258 [2024-11-28 00:09:09.639646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:55.258 [2024-11-28 00:09:09.639652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:55.258 [2024-11-28 00:09:09.639660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:55.258 [2024-11-28 00:09:09.639665] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:55.258 [2024-11-28 00:09:09.639673] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:55.258 [2024-11-28 00:09:09.639681] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:55.258 [2024-11-28 00:09:09.639688] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:55.258 [2024-11-28 00:09:09.639693] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:55.258 [2024-11-28 00:09:09.639700] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:55.258 [2024-11-28 00:09:09.639705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.639714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:55.258 [2024-11-28 00:09:09.639722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:14:55.258 [2024-11-28 00:09:09.639728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.644891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.644923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:55.258 [2024-11-28 00:09:09.644930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:14:55.258 [2024-11-28 00:09:09.644940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.645002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.645009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:55.258 [2024-11-28 00:09:09.645016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:55.258 [2024-11-28 00:09:09.645023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.665778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.665995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:55.258 [2024-11-28 00:09:09.666025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.730 ms 00:14:55.258 [2024-11-28 00:09:09.666043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.666099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.666120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:55.258 [2024-11-28 00:09:09.666135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:55.258 [2024-11-28 00:09:09.666150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.666614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.666658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:55.258 [2024-11-28 00:09:09.666677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:14:55.258 [2024-11-28 00:09:09.666695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.666887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.666920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:55.258 [2024-11-28 00:09:09.666945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:14:55.258 [2024-11-28 00:09:09.666966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.672877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.672990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:55.258 [2024-11-28 00:09:09.673006] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.884 ms 00:14:55.258 [2024-11-28 00:09:09.673020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.681265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:14:55.258 [2024-11-28 00:09:09.685450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.685472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:55.258 [2024-11-28 00:09:09.685482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.368 ms 00:14:55.258 [2024-11-28 00:09:09.685489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.742977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.258 [2024-11-28 00:09:09.743025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:55.258 [2024-11-28 00:09:09.743039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.450 ms 00:14:55.258 [2024-11-28 00:09:09.743050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.258 [2024-11-28 00:09:09.743072] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:14:55.258 [2024-11-28 00:09:09.743083] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:57.790 [2024-11-28 00:09:12.114722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.114787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:57.790 [2024-11-28 00:09:12.114803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2371.630 ms 00:14:57.790 [2024-11-28 00:09:12.114812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.114994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.115011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:57.790 [2024-11-28 00:09:12.115028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:14:57.790 [2024-11-28 00:09:12.115035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.118390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.118420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:57.790 [2024-11-28 00:09:12.118439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:14:57.790 [2024-11-28 00:09:12.118447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.121434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.121559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:57.790 [2024-11-28 00:09:12.121579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:14:57.790 [2024-11-28 00:09:12.121586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.121754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.121763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:14:57.790 [2024-11-28 00:09:12.121775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:14:57.790 [2024-11-28 00:09:12.121781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.144595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.144717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:57.790 [2024-11-28 00:09:12.144741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.793 ms 00:14:57.790 [2024-11-28 00:09:12.144749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.149422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.149456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:57.790 [2024-11-28 00:09:12.149471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:14:57.790 [2024-11-28 00:09:12.149480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.150678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.150707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:57.790 [2024-11-28 00:09:12.150719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:14:57.790 [2024-11-28 00:09:12.150726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.154434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.154467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:57.790 [2024-11-28 00:09:12.154479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:14:57.790 [2024-11-28 00:09:12.154486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.154522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.154531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:57.790 [2024-11-28 00:09:12.154541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:57.790 [2024-11-28 00:09:12.154548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.154613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:57.790 [2024-11-28 00:09:12.154622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:57.790 [2024-11-28 00:09:12.154633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:14:57.790 [2024-11-28 00:09:12.154640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:57.790 [2024-11-28 00:09:12.155436] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2523.882 ms, result 0 00:14:57.790 { 00:14:57.790 "name": "ftl0", 00:14:57.790 "uuid": "5365d95f-2b75-40c7-b686-01d3645e9f21" 00:14:57.790 } 00:14:57.790 00:09:12 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:14:57.790 00:09:12 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:14:57.791 00:09:12 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:14:57.791 00:09:12 -- ftl/bdevperf.sh@31 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:14:58.049 [2024-11-28 00:09:12.441870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:14:58.049 I/O size of 69632 is greater than zero copy threshold (65536). 00:14:58.049 Zero copy mechanism will not be used. 00:14:58.049 Running I/O for 4 seconds... 00:15:02.236 00:15:02.236 Latency(us) 00:15:02.236 [2024-11-28T00:09:16.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.236 [2024-11-28T00:09:16.838Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:02.236 ftl0 : 4.00 1879.70 124.82 0.00 0.00 554.73 154.39 4058.19 00:15:02.236 [2024-11-28T00:09:16.838Z] =================================================================================================================== 00:15:02.236 [2024-11-28T00:09:16.838Z] Total : 1879.70 124.82 0.00 0.00 554.73 154.39 4058.19 00:15:02.236 [2024-11-28 00:09:16.448474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:02.236 0 00:15:02.236 00:09:16 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:02.236 [2024-11-28 00:09:16.554487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:02.236 Running I/O for 4 seconds... 00:15:06.421 00:15:06.421 Latency(us) 00:15:06.421 [2024-11-28T00:09:21.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.421 [2024-11-28T00:09:21.023Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.421 ftl0 : 4.03 6583.46 25.72 0.00 0.00 19374.59 247.34 43354.58 00:15:06.421 [2024-11-28T00:09:21.023Z] =================================================================================================================== 00:15:06.421 [2024-11-28T00:09:21.023Z] Total : 6583.46 25.72 0.00 0.00 19374.59 0.00 43354.58 00:15:06.421 [2024-11-28 00:09:20.587383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:06.421 0 00:15:06.421 00:09:20 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:06.421 [2024-11-28 00:09:20.687329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:06.421 Running I/O for 4 seconds... 
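(Note: the workload passes in this section are all driven against the bdevperf process that was started earlier with -z, so nothing is restarted between runs; each pass is just another perform_tests call through the bdevperf helper script. A minimal sketch replaying the same three invocations that appear in this log, assuming the bdevperf application is still listening on /var/tmp/spdk.sock:

    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    # 68 KiB random writes at queue depth 1 -- 69632 B is above the 65536 B zero-copy
    # threshold, so the log notes that the zero-copy mechanism is not used for this pass
    $py perform_tests -q 1 -w randwrite -t 4 -o 69632
    # 4 KiB random writes at queue depth 128
    $py perform_tests -q 128 -w randwrite -t 4 -o 4096
    # 4 KiB verify pass at queue depth 128 (write, then read back and compare)
    $py perform_tests -q 128 -w verify -t 4 -o 4096

The -t 4 argument matches the "Running I/O for 4 seconds..." messages; queue depth, workload type and I/O size are exactly the values used in the runs above and below.)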
00:15:10.612 00:15:10.612 Latency(us) 00:15:10.612 [2024-11-28T00:09:25.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.612 [2024-11-28T00:09:25.214Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:10.612 Verification LBA range: start 0x0 length 0x1400000 00:15:10.612 ftl0 : 4.01 10305.63 40.26 0.00 0.00 12393.56 166.99 25609.45 00:15:10.612 [2024-11-28T00:09:25.214Z] =================================================================================================================== 00:15:10.612 [2024-11-28T00:09:25.214Z] Total : 10305.63 40.26 0.00 0.00 12393.56 0.00 25609.45 00:15:10.612 [2024-11-28 00:09:24.700769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:10.612 0 00:15:10.612 00:09:24 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:10.612 [2024-11-28 00:09:24.889063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.612 [2024-11-28 00:09:24.889109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:10.612 [2024-11-28 00:09:24.889123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:10.612 [2024-11-28 00:09:24.889131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.612 [2024-11-28 00:09:24.889156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:10.612 [2024-11-28 00:09:24.889611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.612 [2024-11-28 00:09:24.889631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:10.613 [2024-11-28 00:09:24.889641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:15:10.613 [2024-11-28 00:09:24.889650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:24.892140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:24.892175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:10.613 [2024-11-28 00:09:24.892184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:15:10.613 [2024-11-28 00:09:24.892193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.083299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.083341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:10.613 [2024-11-28 00:09:25.083355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 191.090 ms 00:15:10.613 [2024-11-28 00:09:25.083378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.089564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.089593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:10.613 [2024-11-28 00:09:25.089602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:15:10.613 [2024-11-28 00:09:25.089613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.091860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.091893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:15:10.613 [2024-11-28 00:09:25.091903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:15:10.613 [2024-11-28 00:09:25.091913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.096090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.096126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:10.613 [2024-11-28 00:09:25.096135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.149 ms 00:15:10.613 [2024-11-28 00:09:25.096152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.096258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.096269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:10.613 [2024-11-28 00:09:25.096278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:15:10.613 [2024-11-28 00:09:25.096286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.098735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.098860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:10.613 [2024-11-28 00:09:25.098874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:15:10.613 [2024-11-28 00:09:25.098886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.100824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.100857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:10.613 [2024-11-28 00:09:25.100865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:15:10.613 [2024-11-28 00:09:25.100873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.102660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.102691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:10.613 [2024-11-28 00:09:25.102700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:15:10.613 [2024-11-28 00:09:25.102708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.104373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.613 [2024-11-28 00:09:25.104403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:10.613 [2024-11-28 00:09:25.104411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:15:10.613 [2024-11-28 00:09:25.104419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.613 [2024-11-28 00:09:25.104445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:10.613 [2024-11-28 00:09:25.104464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104495] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 
00:09:25.104702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:10.613 [2024-11-28 00:09:25.104945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:10.613 [2024-11-28 00:09:25.104952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.104968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.104976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.104985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.104992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:10.614 [2024-11-28 00:09:25.105376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:10.614 [2024-11-28 00:09:25.105384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5365d95f-2b75-40c7-b686-01d3645e9f21 00:15:10.614 [2024-11-28 00:09:25.105393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:10.614 
[2024-11-28 00:09:25.105400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:10.614 [2024-11-28 00:09:25.105409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:10.614 [2024-11-28 00:09:25.105417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:10.614 [2024-11-28 00:09:25.105426] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:10.614 [2024-11-28 00:09:25.105437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:10.614 [2024-11-28 00:09:25.105446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:10.614 [2024-11-28 00:09:25.105452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:10.614 [2024-11-28 00:09:25.105460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:10.614 [2024-11-28 00:09:25.105466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.614 [2024-11-28 00:09:25.105475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:10.614 [2024-11-28 00:09:25.105484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:15:10.614 [2024-11-28 00:09:25.105514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.106874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.614 [2024-11-28 00:09:25.106893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:10.614 [2024-11-28 00:09:25.106901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:15:10.614 [2024-11-28 00:09:25.106910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.106957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.614 [2024-11-28 00:09:25.106969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:10.614 [2024-11-28 00:09:25.106976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:10.614 [2024-11-28 00:09:25.106985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.112026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.112058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:10.614 [2024-11-28 00:09:25.112067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.112076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.112123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.112134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:10.614 [2024-11-28 00:09:25.112142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.112156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.112200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.112211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:10.614 [2024-11-28 00:09:25.112218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.112227] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.112240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.112253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:10.614 [2024-11-28 00:09:25.112262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.112270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.120269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.120425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:10.614 [2024-11-28 00:09:25.120442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.120451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.124003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.124037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:10.614 [2024-11-28 00:09:25.124046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.124061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.124110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.124121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:10.614 [2024-11-28 00:09:25.124128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.614 [2024-11-28 00:09:25.124137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.614 [2024-11-28 00:09:25.124159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.614 [2024-11-28 00:09:25.124169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:10.615 [2024-11-28 00:09:25.124176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.615 [2024-11-28 00:09:25.124187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.615 [2024-11-28 00:09:25.124244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.615 [2024-11-28 00:09:25.124255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:10.615 [2024-11-28 00:09:25.124262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.615 [2024-11-28 00:09:25.124271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.615 [2024-11-28 00:09:25.124304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.615 [2024-11-28 00:09:25.124314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:10.615 [2024-11-28 00:09:25.124322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.615 [2024-11-28 00:09:25.124332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.615 [2024-11-28 00:09:25.124488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.615 [2024-11-28 00:09:25.124522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:10.615 [2024-11-28 00:09:25.124543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:10.615 [2024-11-28 00:09:25.124563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.615 [2024-11-28 00:09:25.124619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.615 [2024-11-28 00:09:25.124645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:10.615 [2024-11-28 00:09:25.124664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.615 [2024-11-28 00:09:25.124686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.615 [2024-11-28 00:09:25.124859] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 235.765 ms, result 0 00:15:10.615 true 00:15:10.615 00:09:25 -- ftl/bdevperf.sh@37 -- # killprocess 82768 00:15:10.615 00:09:25 -- common/autotest_common.sh@936 -- # '[' -z 82768 ']' 00:15:10.615 00:09:25 -- common/autotest_common.sh@940 -- # kill -0 82768 00:15:10.615 00:09:25 -- common/autotest_common.sh@941 -- # uname 00:15:10.615 00:09:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.615 00:09:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82768 00:15:10.615 killing process with pid 82768 00:15:10.615 Received shutdown signal, test time was about 4.000000 seconds 00:15:10.615 00:15:10.615 Latency(us) 00:15:10.615 [2024-11-28T00:09:25.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.615 [2024-11-28T00:09:25.217Z] =================================================================================================================== 00:15:10.615 [2024-11-28T00:09:25.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.615 00:09:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.615 00:09:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.615 00:09:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82768' 00:15:10.615 00:09:25 -- common/autotest_common.sh@955 -- # kill 82768 00:15:10.615 00:09:25 -- common/autotest_common.sh@960 -- # wait 82768 00:15:15.885 00:09:29 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:15:15.885 00:09:29 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:15.885 00:09:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.885 00:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.885 Remove shared memory files 00:15:15.885 00:09:29 -- ftl/bdevperf.sh@41 -- # remove_shm 00:15:15.885 00:09:29 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:15.885 00:09:29 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:15.885 00:09:29 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:15.885 00:09:29 -- ftl/common.sh@207 -- # rm -f rm -f 00:15:15.885 00:09:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:15.885 00:09:29 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:15.885 ************************************ 00:15:15.885 END TEST ftl_bdevperf 00:15:15.885 ************************************ 00:15:15.885 00:15:15.885 real 0m23.804s 00:15:15.885 user 0m26.231s 00:15:15.885 sys 0m0.836s 00:15:15.885 00:09:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:15.885 00:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.885 00:09:29 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:15.885 00:09:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
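For reference, the ftl_trim suite that starts below is handed the same two PCI addresses used throughout this run: the base NVMe namespace at 0000:00:07.0 and the write-buffer cache device at 0000:00:06.0. A minimal sketch of launching it directly, assuming the repository path from this job and a host already prepared for SPDK (root privileges, hugepages configured, devices bound to a userspace driver):

    # Run the trim suite standalone with the base and cache BDFs,
    # mirroring the run_test invocation in the trace above.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/ftl/trim.sh 0000:00:07.0 0000:00:06.0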
00:15:15.885 00:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.885 00:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.885 ************************************ 00:15:15.885 START TEST ftl_trim 00:15:15.885 ************************************ 00:15:15.885 00:09:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:15.885 * Looking for test storage... 00:15:15.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.885 00:09:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:15.885 00:09:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:15.885 00:09:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:15.885 00:09:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:15.885 00:09:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:15.885 00:09:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:15.885 00:09:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:15.885 00:09:29 -- scripts/common.sh@335 -- # IFS=.-: 00:15:15.885 00:09:29 -- scripts/common.sh@335 -- # read -ra ver1 00:15:15.885 00:09:29 -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.885 00:09:29 -- scripts/common.sh@336 -- # read -ra ver2 00:15:15.885 00:09:29 -- scripts/common.sh@337 -- # local 'op=<' 00:15:15.885 00:09:29 -- scripts/common.sh@339 -- # ver1_l=2 00:15:15.885 00:09:29 -- scripts/common.sh@340 -- # ver2_l=1 00:15:15.885 00:09:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:15.885 00:09:29 -- scripts/common.sh@343 -- # case "$op" in 00:15:15.885 00:09:29 -- scripts/common.sh@344 -- # : 1 00:15:15.885 00:09:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:15.885 00:09:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.885 00:09:29 -- scripts/common.sh@364 -- # decimal 1 00:15:15.885 00:09:29 -- scripts/common.sh@352 -- # local d=1 00:15:15.885 00:09:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.885 00:09:29 -- scripts/common.sh@354 -- # echo 1 00:15:15.885 00:09:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:15.885 00:09:29 -- scripts/common.sh@365 -- # decimal 2 00:15:15.885 00:09:29 -- scripts/common.sh@352 -- # local d=2 00:15:15.885 00:09:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.885 00:09:29 -- scripts/common.sh@354 -- # echo 2 00:15:15.885 00:09:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:15.885 00:09:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:15.885 00:09:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:15.885 00:09:29 -- scripts/common.sh@367 -- # return 0 00:15:15.885 00:09:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.885 00:09:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.886 --rc genhtml_branch_coverage=1 00:15:15.886 --rc genhtml_function_coverage=1 00:15:15.886 --rc genhtml_legend=1 00:15:15.886 --rc geninfo_all_blocks=1 00:15:15.886 --rc geninfo_unexecuted_blocks=1 00:15:15.886 00:15:15.886 ' 00:15:15.886 00:09:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.886 --rc genhtml_branch_coverage=1 00:15:15.886 --rc genhtml_function_coverage=1 00:15:15.886 --rc genhtml_legend=1 00:15:15.886 --rc geninfo_all_blocks=1 00:15:15.886 --rc geninfo_unexecuted_blocks=1 00:15:15.886 00:15:15.886 ' 00:15:15.886 00:09:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.886 --rc genhtml_branch_coverage=1 00:15:15.886 --rc genhtml_function_coverage=1 00:15:15.886 --rc genhtml_legend=1 00:15:15.886 --rc geninfo_all_blocks=1 00:15:15.886 --rc geninfo_unexecuted_blocks=1 00:15:15.886 00:15:15.886 ' 00:15:15.886 00:09:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:15.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.886 --rc genhtml_branch_coverage=1 00:15:15.886 --rc genhtml_function_coverage=1 00:15:15.886 --rc genhtml_legend=1 00:15:15.886 --rc geninfo_all_blocks=1 00:15:15.886 --rc geninfo_unexecuted_blocks=1 00:15:15.886 00:15:15.886 ' 00:15:15.886 00:09:29 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:15.886 00:09:29 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:15.886 00:09:29 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.886 00:09:29 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:15.886 00:09:29 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:15.886 00:09:29 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:15.886 00:09:29 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.886 00:09:29 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:15.886 00:09:29 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:15.886 00:09:29 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.886 00:09:29 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.886 00:09:29 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:15.886 00:09:29 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:15.886 00:09:29 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.886 00:09:29 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:15.886 00:09:29 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:15.886 00:09:29 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:15.886 00:09:29 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.886 00:09:29 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.886 00:09:29 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:15.886 00:09:29 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:15.886 00:09:29 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.886 00:09:29 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:15.886 00:09:29 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.886 00:09:29 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:15.886 00:09:29 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:15.886 00:09:29 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:15.886 00:09:29 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.886 00:09:29 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:15.886 00:09:29 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.886 00:09:29 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:15.886 00:09:29 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:15.886 00:09:29 -- ftl/trim.sh@25 -- # timeout=240 00:15:15.886 00:09:29 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:15.886 00:09:29 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:15.886 00:09:29 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:15.886 00:09:29 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:15.886 00:09:29 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:15.886 00:09:29 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:15.886 00:09:29 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:15.886 00:09:29 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:15.886 00:09:29 -- ftl/trim.sh@40 -- # svcpid=83157 00:15:15.886 00:09:29 -- ftl/trim.sh@41 -- # waitforlisten 83157 00:15:15.886 00:09:29 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:15.886 00:09:29 -- common/autotest_common.sh@829 -- # '[' -z 83157 ']' 00:15:15.886 00:09:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.886 
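The trace above shows trim.sh bringing up its SPDK target: spdk_tgt is launched with core mask 0x7 (three reactors) and the script waits on the RPC socket at /var/tmp/spdk.sock before issuing any bdev RPCs. A simplified sketch of that pattern (not the harness's waitforlisten helper; rpc_get_methods is used here only as a cheap liveness probe against the default socket):

    # Start the target on cores 0-2 and wait until its RPC server answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

In the trace that follows, waitforlisten performs the equivalent check against pid 83157 before the base bdev is attached.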
00:09:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.886 00:09:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.886 00:09:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.886 00:09:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.886 [2024-11-28 00:09:30.061125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.886 [2024-11-28 00:09:30.061426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83157 ] 00:15:15.886 [2024-11-28 00:09:30.208431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.886 [2024-11-28 00:09:30.240948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.886 [2024-11-28 00:09:30.241610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.886 [2024-11-28 00:09:30.241912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.886 [2024-11-28 00:09:30.241942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.454 00:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.454 00:09:30 -- common/autotest_common.sh@862 -- # return 0 00:15:16.454 00:09:30 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:16.454 00:09:30 -- ftl/common.sh@54 -- # local name=nvme0 00:15:16.454 00:09:30 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:16.454 00:09:30 -- ftl/common.sh@56 -- # local size=103424 00:15:16.454 00:09:30 -- ftl/common.sh@59 -- # local base_bdev 00:15:16.454 00:09:30 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:16.712 00:09:31 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:16.712 00:09:31 -- ftl/common.sh@62 -- # local base_size 00:15:16.712 00:09:31 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:16.712 00:09:31 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:16.712 00:09:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:16.712 00:09:31 -- common/autotest_common.sh@1369 -- # local bs 00:15:16.712 00:09:31 -- common/autotest_common.sh@1370 -- # local nb 00:15:16.712 00:09:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:16.972 00:09:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:16.972 { 00:15:16.972 "name": "nvme0n1", 00:15:16.972 "aliases": [ 00:15:16.972 "36065261-4692-4541-b87c-0002d5998549" 00:15:16.972 ], 00:15:16.972 "product_name": "NVMe disk", 00:15:16.972 "block_size": 4096, 00:15:16.972 "num_blocks": 1310720, 00:15:16.972 "uuid": "36065261-4692-4541-b87c-0002d5998549", 00:15:16.972 "assigned_rate_limits": { 00:15:16.972 "rw_ios_per_sec": 0, 00:15:16.972 "rw_mbytes_per_sec": 0, 00:15:16.972 "r_mbytes_per_sec": 0, 00:15:16.972 "w_mbytes_per_sec": 0 00:15:16.972 }, 00:15:16.972 "claimed": true, 00:15:16.972 "claim_type": "read_many_write_one", 00:15:16.972 "zoned": false, 00:15:16.972 "supported_io_types": { 00:15:16.972 "read": true, 00:15:16.972 "write": true, 00:15:16.972 "unmap": true, 00:15:16.972 
"write_zeroes": true, 00:15:16.972 "flush": true, 00:15:16.972 "reset": true, 00:15:16.972 "compare": true, 00:15:16.972 "compare_and_write": false, 00:15:16.972 "abort": true, 00:15:16.972 "nvme_admin": true, 00:15:16.972 "nvme_io": true 00:15:16.972 }, 00:15:16.972 "driver_specific": { 00:15:16.972 "nvme": [ 00:15:16.972 { 00:15:16.972 "pci_address": "0000:00:07.0", 00:15:16.972 "trid": { 00:15:16.972 "trtype": "PCIe", 00:15:16.972 "traddr": "0000:00:07.0" 00:15:16.972 }, 00:15:16.972 "ctrlr_data": { 00:15:16.972 "cntlid": 0, 00:15:16.972 "vendor_id": "0x1b36", 00:15:16.972 "model_number": "QEMU NVMe Ctrl", 00:15:16.972 "serial_number": "12341", 00:15:16.972 "firmware_revision": "8.0.0", 00:15:16.972 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:16.972 "oacs": { 00:15:16.972 "security": 0, 00:15:16.972 "format": 1, 00:15:16.972 "firmware": 0, 00:15:16.972 "ns_manage": 1 00:15:16.972 }, 00:15:16.972 "multi_ctrlr": false, 00:15:16.972 "ana_reporting": false 00:15:16.972 }, 00:15:16.972 "vs": { 00:15:16.972 "nvme_version": "1.4" 00:15:16.972 }, 00:15:16.972 "ns_data": { 00:15:16.972 "id": 1, 00:15:16.972 "can_share": false 00:15:16.972 } 00:15:16.972 } 00:15:16.972 ], 00:15:16.972 "mp_policy": "active_passive" 00:15:16.972 } 00:15:16.972 } 00:15:16.972 ]' 00:15:16.972 00:09:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:16.972 00:09:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:16.972 00:09:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:16.972 00:09:31 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:16.972 00:09:31 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:16.972 00:09:31 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:16.972 00:09:31 -- ftl/common.sh@63 -- # base_size=5120 00:15:16.972 00:09:31 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:16.972 00:09:31 -- ftl/common.sh@67 -- # clear_lvols 00:15:16.972 00:09:31 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:16.972 00:09:31 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:17.233 00:09:31 -- ftl/common.sh@28 -- # stores=3e3dd00e-77e1-4e19-a3b4-86c8589981e5 00:15:17.233 00:09:31 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:17.233 00:09:31 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e3dd00e-77e1-4e19-a3b4-86c8589981e5 00:15:17.233 00:09:31 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:17.492 00:09:31 -- ftl/common.sh@68 -- # lvs=cc16a321-578d-403c-88d3-b4dfcdde7a92 00:15:17.492 00:09:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cc16a321-578d-403c-88d3-b4dfcdde7a92 00:15:17.751 00:09:32 -- ftl/trim.sh@43 -- # split_bdev=0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:17.751 00:09:32 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:17.751 00:09:32 -- ftl/common.sh@35 -- # local name=nvc0 00:15:17.751 00:09:32 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:17.751 00:09:32 -- ftl/common.sh@37 -- # local base_bdev=0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:17.751 00:09:32 -- ftl/common.sh@38 -- # local cache_size= 00:15:17.751 00:09:32 -- ftl/common.sh@41 -- # get_bdev_size 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:17.751 00:09:32 -- common/autotest_common.sh@1367 -- # local bdev_name=0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:17.751 00:09:32 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:17.751 00:09:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:17.751 00:09:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:17.751 00:09:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.010 00:09:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:18.010 { 00:15:18.010 "name": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.010 "aliases": [ 00:15:18.010 "lvs/nvme0n1p0" 00:15:18.010 ], 00:15:18.010 "product_name": "Logical Volume", 00:15:18.010 "block_size": 4096, 00:15:18.010 "num_blocks": 26476544, 00:15:18.010 "uuid": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.010 "assigned_rate_limits": { 00:15:18.010 "rw_ios_per_sec": 0, 00:15:18.010 "rw_mbytes_per_sec": 0, 00:15:18.010 "r_mbytes_per_sec": 0, 00:15:18.010 "w_mbytes_per_sec": 0 00:15:18.010 }, 00:15:18.010 "claimed": false, 00:15:18.010 "zoned": false, 00:15:18.010 "supported_io_types": { 00:15:18.010 "read": true, 00:15:18.010 "write": true, 00:15:18.010 "unmap": true, 00:15:18.010 "write_zeroes": true, 00:15:18.010 "flush": false, 00:15:18.010 "reset": true, 00:15:18.010 "compare": false, 00:15:18.010 "compare_and_write": false, 00:15:18.010 "abort": false, 00:15:18.010 "nvme_admin": false, 00:15:18.010 "nvme_io": false 00:15:18.010 }, 00:15:18.010 "driver_specific": { 00:15:18.010 "lvol": { 00:15:18.010 "lvol_store_uuid": "cc16a321-578d-403c-88d3-b4dfcdde7a92", 00:15:18.010 "base_bdev": "nvme0n1", 00:15:18.010 "thin_provision": true, 00:15:18.010 "snapshot": false, 00:15:18.010 "clone": false, 00:15:18.010 "esnap_clone": false 00:15:18.010 } 00:15:18.010 } 00:15:18.010 } 00:15:18.010 ]' 00:15:18.010 00:09:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:18.010 00:09:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:18.010 00:09:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:18.010 00:09:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:18.010 00:09:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:18.010 00:09:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:18.010 00:09:32 -- ftl/common.sh@41 -- # local base_size=5171 00:15:18.010 00:09:32 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:18.010 00:09:32 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:18.289 00:09:32 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.289 00:09:32 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.289 00:09:32 -- ftl/common.sh@48 -- # get_bdev_size 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.289 00:09:32 -- common/autotest_common.sh@1367 -- # local bdev_name=0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.289 00:09:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:18.289 00:09:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:18.289 00:09:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:18.289 00:09:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.551 00:09:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:18.551 { 00:15:18.551 "name": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.551 "aliases": [ 00:15:18.551 "lvs/nvme0n1p0" 00:15:18.551 ], 00:15:18.551 "product_name": "Logical Volume", 00:15:18.551 "block_size": 4096, 00:15:18.551 "num_blocks": 26476544, 
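The preceding lines prepare the base device: the PCIe controller at 0000:00:07.0 is attached as nvme0 (namespace nvme0n1), its capacity is derived from bdev_get_bdevs output (4096-byte blocks x 1310720 blocks = 5120 MiB), a stale lvstore from an earlier run is deleted, and a fresh lvstore plus a 103424 MiB thin-provisioned lvol are created on top of it. Condensed into a standalone sketch of the same RPC sequence (UUIDs and sizes are specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the base namespace and check its capacity in MiB.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    info=$($rpc bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 in this run
    echo "base size: $(( bs * nb / 1024 / 1024 )) MiB"   # prints 5120
    # Carve a thin-provisioned 103424 MiB volume to serve as FTL's data device;
    # bdev_lvol_create_lvstore prints the new lvstore UUID on stdout.
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"

The cache-side preparation (attaching nvc0 at 0000:00:06.0 and splitting off nvc0n1p0) continues in the trace below.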
00:15:18.551 "uuid": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.551 "assigned_rate_limits": { 00:15:18.551 "rw_ios_per_sec": 0, 00:15:18.551 "rw_mbytes_per_sec": 0, 00:15:18.551 "r_mbytes_per_sec": 0, 00:15:18.551 "w_mbytes_per_sec": 0 00:15:18.551 }, 00:15:18.551 "claimed": false, 00:15:18.551 "zoned": false, 00:15:18.551 "supported_io_types": { 00:15:18.551 "read": true, 00:15:18.551 "write": true, 00:15:18.551 "unmap": true, 00:15:18.551 "write_zeroes": true, 00:15:18.551 "flush": false, 00:15:18.551 "reset": true, 00:15:18.551 "compare": false, 00:15:18.551 "compare_and_write": false, 00:15:18.551 "abort": false, 00:15:18.551 "nvme_admin": false, 00:15:18.551 "nvme_io": false 00:15:18.551 }, 00:15:18.551 "driver_specific": { 00:15:18.551 "lvol": { 00:15:18.551 "lvol_store_uuid": "cc16a321-578d-403c-88d3-b4dfcdde7a92", 00:15:18.551 "base_bdev": "nvme0n1", 00:15:18.551 "thin_provision": true, 00:15:18.551 "snapshot": false, 00:15:18.551 "clone": false, 00:15:18.551 "esnap_clone": false 00:15:18.551 } 00:15:18.551 } 00:15:18.551 } 00:15:18.551 ]' 00:15:18.551 00:09:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:18.551 00:09:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:18.551 00:09:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:18.551 00:09:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:18.551 00:09:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:18.551 00:09:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:18.551 00:09:32 -- ftl/common.sh@48 -- # cache_size=5171 00:15:18.551 00:09:32 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:18.551 00:09:33 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:18.551 00:09:33 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:18.810 00:09:33 -- ftl/trim.sh@47 -- # get_bdev_size 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.810 00:09:33 -- common/autotest_common.sh@1367 -- # local bdev_name=0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.810 00:09:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:18.810 00:09:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:18.810 00:09:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:18.810 00:09:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0201a2b7-96eb-4378-a0bc-5a843f907a7e 00:15:18.810 00:09:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:18.810 { 00:15:18.810 "name": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.810 "aliases": [ 00:15:18.810 "lvs/nvme0n1p0" 00:15:18.810 ], 00:15:18.810 "product_name": "Logical Volume", 00:15:18.810 "block_size": 4096, 00:15:18.810 "num_blocks": 26476544, 00:15:18.810 "uuid": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:18.810 "assigned_rate_limits": { 00:15:18.810 "rw_ios_per_sec": 0, 00:15:18.810 "rw_mbytes_per_sec": 0, 00:15:18.810 "r_mbytes_per_sec": 0, 00:15:18.810 "w_mbytes_per_sec": 0 00:15:18.810 }, 00:15:18.810 "claimed": false, 00:15:18.810 "zoned": false, 00:15:18.810 "supported_io_types": { 00:15:18.810 "read": true, 00:15:18.810 "write": true, 00:15:18.810 "unmap": true, 00:15:18.810 "write_zeroes": true, 00:15:18.810 "flush": false, 00:15:18.810 "reset": true, 00:15:18.810 "compare": false, 00:15:18.810 "compare_and_write": false, 00:15:18.810 "abort": false, 00:15:18.810 "nvme_admin": false, 00:15:18.810 "nvme_io": false 00:15:18.810 }, 00:15:18.810 "driver_specific": { 00:15:18.810 "lvol": { 00:15:18.810 
"lvol_store_uuid": "cc16a321-578d-403c-88d3-b4dfcdde7a92", 00:15:18.810 "base_bdev": "nvme0n1", 00:15:18.810 "thin_provision": true, 00:15:18.810 "snapshot": false, 00:15:18.810 "clone": false, 00:15:18.810 "esnap_clone": false 00:15:18.810 } 00:15:18.810 } 00:15:18.810 } 00:15:18.810 ]' 00:15:18.810 00:09:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:18.810 00:09:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:18.810 00:09:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:18.810 00:09:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:18.810 00:09:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:18.810 00:09:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:18.810 00:09:33 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:18.810 00:09:33 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0201a2b7-96eb-4378-a0bc-5a843f907a7e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:19.070 [2024-11-28 00:09:33.553962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.554002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:19.070 [2024-11-28 00:09:33.554014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:19.070 [2024-11-28 00:09:33.554020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.555833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.555950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:19.070 [2024-11-28 00:09:33.555967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:15:19.070 [2024-11-28 00:09:33.555973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.556037] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:19.070 [2024-11-28 00:09:33.556219] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:19.070 [2024-11-28 00:09:33.556232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.556238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:19.070 [2024-11-28 00:09:33.556247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:15:19.070 [2024-11-28 00:09:33.556252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.556326] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:19.070 [2024-11-28 00:09:33.557269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.557298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:19.070 [2024-11-28 00:09:33.557305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:19.070 [2024-11-28 00:09:33.557313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.561996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.562034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:19.070 
[2024-11-28 00:09:33.562042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:15:19.070 [2024-11-28 00:09:33.562052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.562149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.562159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:19.070 [2024-11-28 00:09:33.562166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:19.070 [2024-11-28 00:09:33.562183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.562210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.562217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:19.070 [2024-11-28 00:09:33.562223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:19.070 [2024-11-28 00:09:33.562229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.562254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:19.070 [2024-11-28 00:09:33.563503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.563526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:19.070 [2024-11-28 00:09:33.563543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:15:19.070 [2024-11-28 00:09:33.563549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.563585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.563592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:19.070 [2024-11-28 00:09:33.563601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:19.070 [2024-11-28 00:09:33.563606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.563631] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:19.070 [2024-11-28 00:09:33.563729] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:19.070 [2024-11-28 00:09:33.563741] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:19.070 [2024-11-28 00:09:33.563749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:19.070 [2024-11-28 00:09:33.563758] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:19.070 [2024-11-28 00:09:33.563764] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:19.070 [2024-11-28 00:09:33.563795] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:19.070 [2024-11-28 00:09:33.563808] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:19.070 [2024-11-28 00:09:33.563815] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:19.070 [2024-11-28 00:09:33.563821] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:19.070 [2024-11-28 
00:09:33.563828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.563833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:19.070 [2024-11-28 00:09:33.563841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:15:19.070 [2024-11-28 00:09:33.563846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.563903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.070 [2024-11-28 00:09:33.563910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:19.070 [2024-11-28 00:09:33.563925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:19.070 [2024-11-28 00:09:33.563931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.070 [2024-11-28 00:09:33.564012] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:19.070 [2024-11-28 00:09:33.564019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:19.070 [2024-11-28 00:09:33.564034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.070 [2024-11-28 00:09:33.564047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.070 [2024-11-28 00:09:33.564053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:19.070 [2024-11-28 00:09:33.564058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:19.070 [2024-11-28 00:09:33.564064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:19.070 [2024-11-28 00:09:33.564069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:19.070 [2024-11-28 00:09:33.564075] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:19.070 [2024-11-28 00:09:33.564080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.070 [2024-11-28 00:09:33.564086] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:19.070 [2024-11-28 00:09:33.564092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:19.070 [2024-11-28 00:09:33.564100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.070 [2024-11-28 00:09:33.564105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:19.070 [2024-11-28 00:09:33.564111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:19.070 [2024-11-28 00:09:33.564117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.070 [2024-11-28 00:09:33.564124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:19.070 [2024-11-28 00:09:33.564129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:19.070 [2024-11-28 00:09:33.564134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.070 [2024-11-28 00:09:33.564139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:19.070 [2024-11-28 00:09:33.564146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:19.070 [2024-11-28 00:09:33.564152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:19.070 [2024-11-28 00:09:33.564159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:19.070 [2024-11-28 00:09:33.564165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:19.070 [2024-11-28 00:09:33.564173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.070 [2024-11-28 00:09:33.564178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:19.071 [2024-11-28 00:09:33.564185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:19.071 [2024-11-28 00:09:33.564191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.071 [2024-11-28 00:09:33.564199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:19.071 [2024-11-28 00:09:33.564204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:19.071 [2024-11-28 00:09:33.564211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.071 [2024-11-28 00:09:33.564217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:19.071 [2024-11-28 00:09:33.564224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:19.071 [2024-11-28 00:09:33.564230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:19.071 [2024-11-28 00:09:33.564236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:19.071 [2024-11-28 00:09:33.564242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:19.071 [2024-11-28 00:09:33.564249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.071 [2024-11-28 00:09:33.564254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:19.071 [2024-11-28 00:09:33.564261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:19.071 [2024-11-28 00:09:33.564267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.071 [2024-11-28 00:09:33.564273] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:19.071 [2024-11-28 00:09:33.564279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:19.071 [2024-11-28 00:09:33.564287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.071 [2024-11-28 00:09:33.564293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.071 [2024-11-28 00:09:33.564304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:19.071 [2024-11-28 00:09:33.564310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:19.071 [2024-11-28 00:09:33.564316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:19.071 [2024-11-28 00:09:33.564323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:19.071 [2024-11-28 00:09:33.564332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:19.071 [2024-11-28 00:09:33.564338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:19.071 [2024-11-28 00:09:33.564346] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:19.071 [2024-11-28 00:09:33.564381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.071 [2024-11-28 00:09:33.564390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:19.071 [2024-11-28 00:09:33.564397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:19.071 [2024-11-28 00:09:33.564404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:19.071 [2024-11-28 00:09:33.564410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:19.071 [2024-11-28 00:09:33.564418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:19.071 [2024-11-28 00:09:33.564424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:19.071 [2024-11-28 00:09:33.564432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:19.071 [2024-11-28 00:09:33.564438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:19.071 [2024-11-28 00:09:33.564447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:19.071 [2024-11-28 00:09:33.564457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:19.071 [2024-11-28 00:09:33.564464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:19.071 [2024-11-28 00:09:33.564471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:19.071 [2024-11-28 00:09:33.564478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:19.071 [2024-11-28 00:09:33.564484] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:19.071 [2024-11-28 00:09:33.564492] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.071 [2024-11-28 00:09:33.564501] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:19.071 [2024-11-28 00:09:33.564508] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:19.071 [2024-11-28 00:09:33.564514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:19.071 [2024-11-28 00:09:33.564521] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:19.071 [2024-11-28 00:09:33.564528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.564536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:19.071 [2024-11-28 00:09:33.564543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:15:19.071 [2024-11-28 00:09:33.564551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.569860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:19.071 [2024-11-28 00:09:33.569975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:19.071 [2024-11-28 00:09:33.569987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms 00:15:19.071 [2024-11-28 00:09:33.570002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.570088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.570096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:19.071 [2024-11-28 00:09:33.570103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:19.071 [2024-11-28 00:09:33.570110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.577975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.578004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:19.071 [2024-11-28 00:09:33.578011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.846 ms 00:15:19.071 [2024-11-28 00:09:33.578018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.578063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.578071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:19.071 [2024-11-28 00:09:33.578077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:19.071 [2024-11-28 00:09:33.578084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.578379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.578405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:19.071 [2024-11-28 00:09:33.578412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:15:19.071 [2024-11-28 00:09:33.578419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.578509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.578516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:19.071 [2024-11-28 00:09:33.578522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:19.071 [2024-11-28 00:09:33.578538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.591298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.591335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:19.071 [2024-11-28 00:09:33.591346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.735 ms 00:15:19.071 [2024-11-28 00:09:33.591353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.071 [2024-11-28 00:09:33.601800] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:19.071 [2024-11-28 00:09:33.615115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.071 [2024-11-28 00:09:33.615230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:19.071 [2024-11-28 00:09:33.615247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.659 ms 00:15:19.071 
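The management trace running through this block is the first-time startup triggered by the bdev_ftl_create call issued a few lines earlier. For reference, that call and the cache split feeding it, pulled out of the trace above as a standalone sketch (device names and the lvol UUID are specific to this run; -t 240 matches the suite's 240 s timeout, since first startup scrubs the NV cache, visible below as "Scrub NV cache"):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Split a 5171 MiB write-buffer partition (nvc0n1p0) off the cache device.
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # Create the FTL bdev over the thin lvol (data) and the split (NV cache).
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 0201a2b7-96eb-4378-a0bc-5a843f907a7e \
        -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10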
[2024-11-28 00:09:33.615255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.330 [2024-11-28 00:09:33.675742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.330 [2024-11-28 00:09:33.675783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:19.330 [2024-11-28 00:09:33.675794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.432 ms 00:15:19.330 [2024-11-28 00:09:33.675800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.330 [2024-11-28 00:09:33.675857] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:19.330 [2024-11-28 00:09:33.675867] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:21.862 [2024-11-28 00:09:35.971157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.971247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:21.862 [2024-11-28 00:09:35.971281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2295.276 ms 00:15:21.862 [2024-11-28 00:09:35.971297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:35.971699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.971722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:21.862 [2024-11-28 00:09:35.971755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:15:21.862 [2024-11-28 00:09:35.971769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:35.975313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.975350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:21.862 [2024-11-28 00:09:35.975378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:15:21.862 [2024-11-28 00:09:35.975389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:35.978046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.978077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:21.862 [2024-11-28 00:09:35.978088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:15:21.862 [2024-11-28 00:09:35.978096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:35.978279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.978288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:21.862 [2024-11-28 00:09:35.978298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:21.862 [2024-11-28 00:09:35.978316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:35.998605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:35.998769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:21.862 [2024-11-28 00:09:35.998790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.258 ms 00:15:21.862 [2024-11-28 00:09:35.998800] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.002582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:36.002623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:21.862 [2024-11-28 00:09:36.002637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:15:21.862 [2024-11-28 00:09:36.002646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.006278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:36.006425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:21.862 [2024-11-28 00:09:36.006443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:15:21.862 [2024-11-28 00:09:36.006453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.009730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:36.009844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:21.862 [2024-11-28 00:09:36.009862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:15:21.862 [2024-11-28 00:09:36.009869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.009915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:36.009924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:21.862 [2024-11-28 00:09:36.009934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:21.862 [2024-11-28 00:09:36.009941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.010015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.862 [2024-11-28 00:09:36.010034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:21.862 [2024-11-28 00:09:36.010045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:21.862 [2024-11-28 00:09:36.010052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.862 [2024-11-28 00:09:36.010869] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:21.862 [2024-11-28 00:09:36.011811] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2456.626 ms, result 0 00:15:21.862 [2024-11-28 00:09:36.012680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:21.862 { 00:15:21.862 "name": "ftl0", 00:15:21.862 "uuid": "22b2d521-b6b4-4305-b48e-c46298f0d7de" 00:15:21.862 } 00:15:21.862 00:09:36 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:21.862 00:09:36 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:21.862 00:09:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:21.863 00:09:36 -- common/autotest_common.sh@899 -- # local i 00:15:21.863 00:09:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:21.863 00:09:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:21.863 00:09:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:21.863 00:09:36 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:21.863 [ 00:15:21.863 { 00:15:21.863 "name": "ftl0", 00:15:21.863 "aliases": [ 00:15:21.863 "22b2d521-b6b4-4305-b48e-c46298f0d7de" 00:15:21.863 ], 00:15:21.863 "product_name": "FTL disk", 00:15:21.863 "block_size": 4096, 00:15:21.863 "num_blocks": 23592960, 00:15:21.863 "uuid": "22b2d521-b6b4-4305-b48e-c46298f0d7de", 00:15:21.863 "assigned_rate_limits": { 00:15:21.863 "rw_ios_per_sec": 0, 00:15:21.863 "rw_mbytes_per_sec": 0, 00:15:21.863 "r_mbytes_per_sec": 0, 00:15:21.863 "w_mbytes_per_sec": 0 00:15:21.863 }, 00:15:21.863 "claimed": false, 00:15:21.863 "zoned": false, 00:15:21.863 "supported_io_types": { 00:15:21.863 "read": true, 00:15:21.863 "write": true, 00:15:21.863 "unmap": true, 00:15:21.863 "write_zeroes": true, 00:15:21.863 "flush": true, 00:15:21.863 "reset": false, 00:15:21.863 "compare": false, 00:15:21.863 "compare_and_write": false, 00:15:21.863 "abort": false, 00:15:21.863 "nvme_admin": false, 00:15:21.863 "nvme_io": false 00:15:21.863 }, 00:15:21.863 "driver_specific": { 00:15:21.863 "ftl": { 00:15:21.863 "base_bdev": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:21.863 "cache": "nvc0n1p0" 00:15:21.863 } 00:15:21.863 } 00:15:21.863 } 00:15:21.863 ] 00:15:21.863 00:09:36 -- common/autotest_common.sh@905 -- # return 0 00:15:21.863 00:09:36 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:21.863 00:09:36 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:22.121 00:09:36 -- ftl/trim.sh@56 -- # echo ']}' 00:15:22.121 00:09:36 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:22.380 00:09:36 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:22.380 { 00:15:22.380 "name": "ftl0", 00:15:22.380 "aliases": [ 00:15:22.380 "22b2d521-b6b4-4305-b48e-c46298f0d7de" 00:15:22.380 ], 00:15:22.380 "product_name": "FTL disk", 00:15:22.380 "block_size": 4096, 00:15:22.380 "num_blocks": 23592960, 00:15:22.380 "uuid": "22b2d521-b6b4-4305-b48e-c46298f0d7de", 00:15:22.380 "assigned_rate_limits": { 00:15:22.380 "rw_ios_per_sec": 0, 00:15:22.380 "rw_mbytes_per_sec": 0, 00:15:22.380 "r_mbytes_per_sec": 0, 00:15:22.380 "w_mbytes_per_sec": 0 00:15:22.380 }, 00:15:22.380 "claimed": false, 00:15:22.380 "zoned": false, 00:15:22.380 "supported_io_types": { 00:15:22.380 "read": true, 00:15:22.380 "write": true, 00:15:22.380 "unmap": true, 00:15:22.380 "write_zeroes": true, 00:15:22.380 "flush": true, 00:15:22.380 "reset": false, 00:15:22.380 "compare": false, 00:15:22.380 "compare_and_write": false, 00:15:22.380 "abort": false, 00:15:22.380 "nvme_admin": false, 00:15:22.380 "nvme_io": false 00:15:22.380 }, 00:15:22.380 "driver_specific": { 00:15:22.380 "ftl": { 00:15:22.380 "base_bdev": "0201a2b7-96eb-4378-a0bc-5a843f907a7e", 00:15:22.380 "cache": "nvc0n1p0" 00:15:22.380 } 00:15:22.380 } 00:15:22.380 } 00:15:22.380 ]' 00:15:22.380 00:09:36 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:22.380 00:09:36 -- ftl/trim.sh@60 -- # nb=23592960 00:15:22.380 00:09:36 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:22.380 [2024-11-28 00:09:36.961359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.961418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:22.380 [2024-11-28 00:09:36.961433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:22.380 [2024-11-28 00:09:36.961442] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.961478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:22.380 [2024-11-28 00:09:36.961911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.961926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:22.380 [2024-11-28 00:09:36.961936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:15:22.380 [2024-11-28 00:09:36.961944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.962478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.962509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:22.380 [2024-11-28 00:09:36.962523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:15:22.380 [2024-11-28 00:09:36.962531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.966180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.966201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:22.380 [2024-11-28 00:09:36.966213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:15:22.380 [2024-11-28 00:09:36.966221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.973125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.973156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:22.380 [2024-11-28 00:09:36.973168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:15:22.380 [2024-11-28 00:09:36.973175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.974800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.974831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:22.380 [2024-11-28 00:09:36.974842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:15:22.380 [2024-11-28 00:09:36.974849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.979227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.979259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:22.380 [2024-11-28 00:09:36.979284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:15:22.380 [2024-11-28 00:09:36.979292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.380 [2024-11-28 00:09:36.979469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.380 [2024-11-28 00:09:36.979479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:22.381 [2024-11-28 00:09:36.979488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:15:22.381 [2024-11-28 00:09:36.979498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.381 [2024-11-28 00:09:36.980991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.381 [2024-11-28 00:09:36.981020] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:22.381 [2024-11-28 00:09:36.981030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:15:22.381 [2024-11-28 00:09:36.981036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.641 [2024-11-28 00:09:36.982200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.641 [2024-11-28 00:09:36.982230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:22.641 [2024-11-28 00:09:36.982241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:15:22.641 [2024-11-28 00:09:36.982249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.641 [2024-11-28 00:09:36.983178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.641 [2024-11-28 00:09:36.983293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:22.641 [2024-11-28 00:09:36.983313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:15:22.641 [2024-11-28 00:09:36.983320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.641 [2024-11-28 00:09:36.984501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.641 [2024-11-28 00:09:36.984525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:22.641 [2024-11-28 00:09:36.984535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:15:22.641 [2024-11-28 00:09:36.984542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.641 [2024-11-28 00:09:36.984582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:22.641 [2024-11-28 00:09:36.984606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984932] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:22.641 [2024-11-28 00:09:36.984956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.984966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.984974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.984982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.984990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.984998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985146] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 
00:09:36.985349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:22.642 [2024-11-28 00:09:36.985491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:22.642 [2024-11-28 00:09:36.985508] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:22.642 [2024-11-28 00:09:36.985516] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:22.642 [2024-11-28 00:09:36.985524] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:22.642 [2024-11-28 00:09:36.985531] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:22.642 [2024-11-28 00:09:36.985543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:22.642 [2024-11-28 00:09:36.985550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:22.642 [2024-11-28 00:09:36.985568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:22.642 [2024-11-28 00:09:36.985577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:22.642 [2024-11-28 00:09:36.985585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:22.642 [2024-11-28 00:09:36.985592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:22.642 [2024-11-28 00:09:36.985600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.642 [2024-11-28 00:09:36.985607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:22.642 [2024-11-28 00:09:36.985616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:15:22.642 [2024-11-28 00:09:36.985623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
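The statistics block dumped just above shows 960 total writes, 0 user writes and a WAF of inf. WAF here is presumably the usual write-amplification ratio of device writes to user writes, which is undefined (and so reported as inf) while no user data has been written yet; the 960 total writes of 4 KiB blocks amount to roughly 3.75 MiB of metadata written during startup. The sketch below is a hypothetical one-liner for recomputing the ratio from a saved copy of these log lines.

# Hypothetical recomputation of the WAF from the ftl_debug stats lines above,
# assuming they were saved to ftl0_stats.log (a made-up file name).
awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF}
     END { printf "WAF: %s\n", (u == 0 ? "inf" : t / u) }' ftl0_stats.log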
00:15:22.642 [2024-11-28 00:09:36.987032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.642 [2024-11-28 00:09:36.987052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:22.642 [2024-11-28 00:09:36.987063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:15:22.642 [2024-11-28 00:09:36.987069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.642 [2024-11-28 00:09:36.987150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.642 [2024-11-28 00:09:36.987159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:22.642 [2024-11-28 00:09:36.987171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:22.642 [2024-11-28 00:09:36.987178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.642 [2024-11-28 00:09:36.992155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.642 [2024-11-28 00:09:36.992187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:22.642 [2024-11-28 00:09:36.992198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.642 [2024-11-28 00:09:36.992205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.642 [2024-11-28 00:09:36.992287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.642 [2024-11-28 00:09:36.992297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:22.642 [2024-11-28 00:09:36.992305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.642 [2024-11-28 00:09:36.992312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.642 [2024-11-28 00:09:36.992384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.642 [2024-11-28 00:09:36.992393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:22.642 [2024-11-28 00:09:36.992402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.642 [2024-11-28 00:09:36.992409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.642 [2024-11-28 00:09:36.992443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:36.992451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:22.643 [2024-11-28 00:09:36.992462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:36.992468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.001430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.001469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:22.643 [2024-11-28 00:09:37.001481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.001488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:22.643 [2024-11-28 00:09:37.005220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 
00:09:37.005229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:22.643 [2024-11-28 00:09:37.005284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:22.643 [2024-11-28 00:09:37.005408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:22.643 [2024-11-28 00:09:37.005522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:22.643 [2024-11-28 00:09:37.005613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:22.643 [2024-11-28 00:09:37.005695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:22.643 [2024-11-28 00:09:37.005768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:22.643 [2024-11-28 00:09:37.005780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:22.643 [2024-11-28 00:09:37.005788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.643 [2024-11-28 00:09:37.005945] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.576 ms, result 0 00:15:22.643 true 00:15:22.643 00:09:37 -- ftl/trim.sh@63 -- # killprocess 83157 00:15:22.643 00:09:37 -- common/autotest_common.sh@936 -- # '[' -z 83157 ']' 00:15:22.643 00:09:37 -- common/autotest_common.sh@940 -- # kill -0 83157 00:15:22.643 00:09:37 -- common/autotest_common.sh@941 -- # uname 00:15:22.643 00:09:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:22.643 00:09:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83157 00:15:22.643 killing process with pid 83157 00:15:22.643 00:09:37 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:15:22.643 00:09:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:22.643 00:09:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83157' 00:15:22.643 00:09:37 -- common/autotest_common.sh@955 -- # kill 83157 00:15:22.643 00:09:37 -- common/autotest_common.sh@960 -- # wait 83157 00:15:27.910 00:09:42 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:28.477 65536+0 records in 00:15:28.477 65536+0 records out 00:15:28.477 268435456 bytes (268 MB, 256 MiB) copied, 0.800797 s, 335 MB/s 00:15:28.477 00:09:42 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:28.477 [2024-11-28 00:09:42.920407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:28.477 [2024-11-28 00:09:42.920517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83361 ] 00:15:28.477 [2024-11-28 00:09:43.069318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.736 [2024-11-28 00:09:43.098436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.736 [2024-11-28 00:09:43.179920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:28.736 [2024-11-28 00:09:43.180164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:28.736 [2024-11-28 00:09:43.325533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.325700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:28.736 [2024-11-28 00:09:43.325789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:28.736 [2024-11-28 00:09:43.325816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.327970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.328084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:28.736 [2024-11-28 00:09:43.328098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:15:28.736 [2024-11-28 00:09:43.328106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.328168] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:28.736 [2024-11-28 00:09:43.328419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:28.736 [2024-11-28 00:09:43.328436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.328443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:28.736 [2024-11-28 00:09:43.328452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:15:28.736 [2024-11-28 00:09:43.328459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.329544] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:28.736 [2024-11-28 00:09:43.331540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 
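The dd step above generates the 256 MiB random pattern that spdk_dd then writes to ftl0: 65536 records of 4 KiB are exactly the 268435456 bytes reported, and dividing by the 0.800797 s copy time reproduces the ~335 MB/s figure. A hypothetical check of that arithmetic, not part of trim.sh:

# Cross-check of the dd output above.
bytes=$(( 65536 * 4096 ))          # 268435456 bytes = 256 MiB
echo "$bytes bytes"
awk -v b="$bytes" -v s=0.800797 'BEGIN { printf "%.0f MB/s\n", b / s / 1000000 }'   # ~335 MB/s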
[2024-11-28 00:09:43.331571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:28.736 [2024-11-28 00:09:43.331586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:15:28.736 [2024-11-28 00:09:43.331593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.331645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.331654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:28.736 [2024-11-28 00:09:43.331662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:28.736 [2024-11-28 00:09:43.331675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.336102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.336131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:28.736 [2024-11-28 00:09:43.336139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.391 ms 00:15:28.736 [2024-11-28 00:09:43.336146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.336227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.336235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:28.736 [2024-11-28 00:09:43.336246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:28.736 [2024-11-28 00:09:43.336252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.336276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.736 [2024-11-28 00:09:43.336284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:28.736 [2024-11-28 00:09:43.336292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:28.736 [2024-11-28 00:09:43.336298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.736 [2024-11-28 00:09:43.336321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:28.996 [2024-11-28 00:09:43.337587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.996 [2024-11-28 00:09:43.337612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:28.996 [2024-11-28 00:09:43.337621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:15:28.996 [2024-11-28 00:09:43.337628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.996 [2024-11-28 00:09:43.337664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.996 [2024-11-28 00:09:43.337674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:28.996 [2024-11-28 00:09:43.337682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:28.996 [2024-11-28 00:09:43.337688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.996 [2024-11-28 00:09:43.337705] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:28.996 [2024-11-28 00:09:43.337721] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:28.996 [2024-11-28 00:09:43.337753] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:28.996 [2024-11-28 00:09:43.337768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:28.996 [2024-11-28 00:09:43.337838] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:28.996 [2024-11-28 00:09:43.337850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:28.996 [2024-11-28 00:09:43.337864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:28.996 [2024-11-28 00:09:43.337876] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:28.996 [2024-11-28 00:09:43.337885] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:28.996 [2024-11-28 00:09:43.337892] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:28.996 [2024-11-28 00:09:43.337899] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:28.996 [2024-11-28 00:09:43.337908] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:28.996 [2024-11-28 00:09:43.337915] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:28.996 [2024-11-28 00:09:43.337925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.337932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:28.997 [2024-11-28 00:09:43.337940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:15:28.997 [2024-11-28 00:09:43.337950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.338018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.338025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:28.997 [2024-11-28 00:09:43.338032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:28.997 [2024-11-28 00:09:43.338039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.338117] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:28.997 [2024-11-28 00:09:43.338126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:28.997 [2024-11-28 00:09:43.338134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:28.997 [2024-11-28 00:09:43.338154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:28.997 [2024-11-28 00:09:43.338175] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:28.997 [2024-11-28 00:09:43.338188] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:28.997 [2024-11-28 00:09:43.338199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:28.997 [2024-11-28 00:09:43.338205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:28.997 [2024-11-28 00:09:43.338211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:28.997 [2024-11-28 00:09:43.338218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:28.997 [2024-11-28 00:09:43.338224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:28.997 [2024-11-28 00:09:43.338241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:28.997 [2024-11-28 00:09:43.338249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:28.997 [2024-11-28 00:09:43.338264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:28.997 [2024-11-28 00:09:43.338271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:28.997 [2024-11-28 00:09:43.338286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:28.997 [2024-11-28 00:09:43.338310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:28.997 [2024-11-28 00:09:43.338332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:28.997 [2024-11-28 00:09:43.338358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:28.997 [2024-11-28 00:09:43.338397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:28.997 [2024-11-28 00:09:43.338412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:28.997 [2024-11-28 00:09:43.338419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:28.997 [2024-11-28 00:09:43.338426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:28.997 [2024-11-28 00:09:43.338433] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:28.997 [2024-11-28 00:09:43.338442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:28.997 
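The NV cache portion of the layout dump above is internally consistent with the header values printed alongside it: 1024 P2L checkpoint pages of 4 KiB give the 4.00 MiB reported for each p2l region, and the 4096 MiB data_nvc region spread over the 4 NV cache chunks works out to 1 GiB per chunk, in line with the 4 GiB scrub on first startup. A small hypothetical check:

# Hypothetical consistency check of the NV cache layout figures above.
echo "$(( 1024 * 4096 / 1048576 )) MiB per p2l region"   # 4 MiB (1024 checkpoint pages * 4 KiB)
echo "$(( 4096 / 4 )) MiB per NV cache chunk"            # 1024 MiB across 4 chunks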
[2024-11-28 00:09:43.338449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:28.997 [2024-11-28 00:09:43.338470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:28.997 [2024-11-28 00:09:43.338477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:28.997 [2024-11-28 00:09:43.338485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:28.997 [2024-11-28 00:09:43.338493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:28.997 [2024-11-28 00:09:43.338502] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:28.997 [2024-11-28 00:09:43.338509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:28.997 [2024-11-28 00:09:43.338518] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:28.997 [2024-11-28 00:09:43.338528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:28.997 [2024-11-28 00:09:43.338542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:28.997 [2024-11-28 00:09:43.338550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:28.997 [2024-11-28 00:09:43.338559] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:28.997 [2024-11-28 00:09:43.338567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:28.997 [2024-11-28 00:09:43.338582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:28.997 [2024-11-28 00:09:43.338591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:28.997 [2024-11-28 00:09:43.338599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:28.997 [2024-11-28 00:09:43.338607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:28.997 [2024-11-28 00:09:43.338615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:28.997 [2024-11-28 00:09:43.338622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:28.997 [2024-11-28 00:09:43.338629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:28.997 [2024-11-28 00:09:43.338636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:28.997 [2024-11-28 00:09:43.338645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:28.997 [2024-11-28 00:09:43.338652] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:28.997 [2024-11-28 00:09:43.338660] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:28.997 [2024-11-28 00:09:43.338667] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:28.997 [2024-11-28 00:09:43.338674] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:28.997 [2024-11-28 00:09:43.338682] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:28.997 [2024-11-28 00:09:43.338689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:28.997 [2024-11-28 00:09:43.338696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.338703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:28.997 [2024-11-28 00:09:43.338710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:15:28.997 [2024-11-28 00:09:43.338717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.344323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.344354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:28.997 [2024-11-28 00:09:43.344379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.567 ms 00:15:28.997 [2024-11-28 00:09:43.344391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.344502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.344516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:28.997 [2024-11-28 00:09:43.344524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:28.997 [2024-11-28 00:09:43.344531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.361149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.361190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:28.997 [2024-11-28 00:09:43.361212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.596 ms 00:15:28.997 [2024-11-28 00:09:43.361223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.361293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.361304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:28.997 [2024-11-28 00:09:43.361314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:28.997 [2024-11-28 00:09:43.361326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.997 [2024-11-28 00:09:43.361665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.997 [2024-11-28 00:09:43.361680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:28.998 [2024-11-28 00:09:43.361690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.319 ms 00:15:28.998 [2024-11-28 00:09:43.361706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.361831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.361849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:28.998 [2024-11-28 00:09:43.361857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:28.998 [2024-11-28 00:09:43.361869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.366993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.367124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:28.998 [2024-11-28 00:09:43.367140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.100 ms 00:15:28.998 [2024-11-28 00:09:43.367148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.369254] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:28.998 [2024-11-28 00:09:43.369287] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:28.998 [2024-11-28 00:09:43.369297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.369305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:28.998 [2024-11-28 00:09:43.369313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:15:28.998 [2024-11-28 00:09:43.369325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.383833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.383863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:28.998 [2024-11-28 00:09:43.383874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.438 ms 00:15:28.998 [2024-11-28 00:09:43.383882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.385560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.385674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:28.998 [2024-11-28 00:09:43.385687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:15:28.998 [2024-11-28 00:09:43.385694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.387025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.387053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:28.998 [2024-11-28 00:09:43.387061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:15:28.998 [2024-11-28 00:09:43.387067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.387257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.387271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:28.998 [2024-11-28 00:09:43.387279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:15:28.998 [2024-11-28 00:09:43.387290] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.404166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.404221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:28.998 [2024-11-28 00:09:43.404233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.857 ms 00:15:28.998 [2024-11-28 00:09:43.404241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.411655] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:28.998 [2024-11-28 00:09:43.425114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.425153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:28.998 [2024-11-28 00:09:43.425164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.792 ms 00:15:28.998 [2024-11-28 00:09:43.425172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.425241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.425250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:28.998 [2024-11-28 00:09:43.425261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:28.998 [2024-11-28 00:09:43.425269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.425314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.425322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:28.998 [2024-11-28 00:09:43.425333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:28.998 [2024-11-28 00:09:43.425340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.426534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.426659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:28.998 [2024-11-28 00:09:43.426673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:15:28.998 [2024-11-28 00:09:43.426684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.426718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.426730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:28.998 [2024-11-28 00:09:43.426737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:28.998 [2024-11-28 00:09:43.426747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.426776] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:28.998 [2024-11-28 00:09:43.426785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.426792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:28.998 [2024-11-28 00:09:43.426800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:28.998 [2024-11-28 00:09:43.426806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.430092] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.430199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:28.998 [2024-11-28 00:09:43.430212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:15:28.998 [2024-11-28 00:09:43.430220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.430285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.998 [2024-11-28 00:09:43.430294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:28.998 [2024-11-28 00:09:43.430302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:28.998 [2024-11-28 00:09:43.430310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.998 [2024-11-28 00:09:43.431096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:28.998 [2024-11-28 00:09:43.432064] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.299 ms, result 0 00:15:28.998 [2024-11-28 00:09:43.432749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:28.998 [2024-11-28 00:09:43.442104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:29.934  [2024-11-28T00:09:45.471Z] Copying: 42/256 [MB] (42 MBps) [2024-11-28T00:09:46.847Z] Copying: 85/256 [MB] (43 MBps) [2024-11-28T00:09:47.782Z] Copying: 128/256 [MB] (43 MBps) [2024-11-28T00:09:48.719Z] Copying: 174/256 [MB] (45 MBps) [2024-11-28T00:09:49.657Z] Copying: 218/256 [MB] (43 MBps) [2024-11-28T00:09:49.657Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-28 00:09:49.325505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:35.055 [2024-11-28 00:09:49.326583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.326611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:35.055 [2024-11-28 00:09:49.326624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:35.055 [2024-11-28 00:09:49.326636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.326662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:35.055 [2024-11-28 00:09:49.327055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.327085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:35.055 [2024-11-28 00:09:49.327095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:15:35.055 [2024-11-28 00:09:49.327102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.328561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.328686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:35.055 [2024-11-28 00:09:49.328701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:15:35.055 [2024-11-28 00:09:49.328708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.334545] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.334577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:35.055 [2024-11-28 00:09:49.334586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.818 ms 00:15:35.055 [2024-11-28 00:09:49.334594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.341478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.341506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:35.055 [2024-11-28 00:09:49.341515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.833 ms 00:15:35.055 [2024-11-28 00:09:49.341522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.342805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.342834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:35.055 [2024-11-28 00:09:49.342843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:15:35.055 [2024-11-28 00:09:49.342850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.346257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.346296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:35.055 [2024-11-28 00:09:49.346305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:15:35.055 [2024-11-28 00:09:49.346312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.346451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.346461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:35.055 [2024-11-28 00:09:49.346470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:15:35.055 [2024-11-28 00:09:49.346480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.348422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.348449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:35.055 [2024-11-28 00:09:49.348457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:15:35.055 [2024-11-28 00:09:49.348464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.349853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.349881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:35.055 [2024-11-28 00:09:49.349889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:15:35.055 [2024-11-28 00:09:49.349896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.055 [2024-11-28 00:09:49.350751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.055 [2024-11-28 00:09:49.350778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:35.055 [2024-11-28 00:09:49.350786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:15:35.055 [2024-11-28 00:09:49.350792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
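The repeating "Action / name / duration / status" quadruplets above come from SPDK's FTL management tracing (trace_step in mngt/ftl_mngt.c), and each management process closes with a "Management process finished ... duration = N ms" summary from finish_msg (105.299 ms for 'FTL startup' above). A minimal sketch of how the per-step timings could be pulled back out of a console log like this one, assuming the log is saved one message per line to a hypothetical build.log; the regexes only mirror the message shapes visible above and are not an official SPDK format:

```python
import re
from collections import defaultdict

# Message shapes copied from the trace_step/finish_msg lines above; "build.log" is a placeholder path.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+)")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
FIN_RE  = re.compile(r"finish_msg: \*NOTICE\*: \[FTL\]\[\w+\] Management process finished, "
                     r"name '(.+?)', duration = ([0-9.]+) ms")

def tally(path="build.log"):
    durations = defaultdict(float)   # step name -> summed duration in ms
    pending = None                   # last "name:" seen, waiting for its "duration:" line
    with open(path) as f:
        for line in f:
            if (m := NAME_RE.search(line)):
                pending = m.group(2).strip()
            elif (m := DUR_RE.search(line)) and pending:
                durations[pending] += float(m.group(1))
                pending = None
            elif (m := FIN_RE.search(line)):
                print(f"{m.group(1)}: reported {m.group(2)} ms, "
                      f"traced steps sum to {sum(durations.values()):.3f} ms")
                durations.clear()

if __name__ == "__main__":
    tally()
```

The summed step durations will not match the reported process total exactly, since the summary also covers time spent between steps, so the sketch only prints the two figures side by side for comparison.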
00:15:35.056 [2024-11-28 00:09:49.351715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.056 [2024-11-28 00:09:49.351828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:35.056 [2024-11-28 00:09:49.351841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:15:35.056 [2024-11-28 00:09:49.351848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.056 [2024-11-28 00:09:49.351876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:35.056 [2024-11-28 00:09:49.351894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.351997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:15:35.056 [2024-11-28 00:09:49.352047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:35.056 [2024-11-28 00:09:49.352517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352596] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:35.057 [2024-11-28 00:09:49.352641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:35.057 [2024-11-28 00:09:49.352648] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:35.057 [2024-11-28 00:09:49.352655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:35.057 [2024-11-28 00:09:49.352661] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:35.057 [2024-11-28 00:09:49.352668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:35.057 [2024-11-28 00:09:49.352676] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:35.057 [2024-11-28 00:09:49.352682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:35.057 [2024-11-28 00:09:49.352694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:35.057 [2024-11-28 00:09:49.352700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:35.057 [2024-11-28 00:09:49.352707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:35.057 [2024-11-28 00:09:49.352712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:35.057 [2024-11-28 00:09:49.352719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.057 [2024-11-28 00:09:49.352726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:35.057 [2024-11-28 00:09:49.352734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:15:35.057 [2024-11-28 00:09:49.352743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.354052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.057 [2024-11-28 00:09:49.354072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:35.057 [2024-11-28 00:09:49.354079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:15:35.057 [2024-11-28 00:09:49.354086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.354144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.057 [2024-11-28 00:09:49.354157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:35.057 [2024-11-28 00:09:49.354165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:35.057 [2024-11-28 00:09:49.354171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.358967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.359067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:35.057 [2024-11-28 00:09:49.359118] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.359175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.359265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.359299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:35.057 [2024-11-28 00:09:49.359351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.359388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.359502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.359562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:35.057 [2024-11-28 00:09:49.359618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.359640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.359699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.359725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:35.057 [2024-11-28 00:09:49.359753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.359799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.367890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.368031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:35.057 [2024-11-28 00:09:49.368083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.368139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.371619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.371721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:35.057 [2024-11-28 00:09:49.371785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.371807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.371954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.371990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:35.057 [2024-11-28 00:09:49.372053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.372168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:35.057 [2024-11-28 00:09:49.372194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.372406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:15:35.057 [2024-11-28 00:09:49.372461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.372609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:35.057 [2024-11-28 00:09:49.372663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.372731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:35.057 [2024-11-28 00:09:49.372739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.057 [2024-11-28 00:09:49.372795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:35.057 [2024-11-28 00:09:49.372803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.057 [2024-11-28 00:09:49.372810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.057 [2024-11-28 00:09:49.372936] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.331 ms, result 0 00:15:35.317 00:15:35.317 00:15:35.317 00:09:49 -- ftl/trim.sh@72 -- # svcpid=83430 00:15:35.317 00:09:49 -- ftl/trim.sh@73 -- # waitforlisten 83430 00:15:35.317 00:09:49 -- common/autotest_common.sh@829 -- # '[' -z 83430 ']' 00:15:35.317 00:09:49 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:35.317 00:09:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.317 00:09:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.317 00:09:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.317 00:09:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.317 00:09:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.576 [2024-11-28 00:09:49.926819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
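Before the second FTL instance is brought up, trim.sh relaunches spdk_tgt and the harness blocks until the target is listening on /var/tmp/spdk.sock ("Waiting for process to start up and listen on UNIX domain socket ..."). A rough standalone illustration of that wait-for-socket idea follows; this is not the actual waitforlisten helper from autotest_common.sh, just a sketch of the same behaviour:

```python
import socket
import time

def wait_for_unix_socket(path="/var/tmp/spdk.sock", timeout=30.0, interval=0.2):
    """Poll until something accepts a connection on the UNIX socket at `path`, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)          # succeeds once the RPC server is up
            return True
        except OSError:              # socket file missing or connection refused: not up yet
            time.sleep(interval)
        finally:
            s.close()
    return False

if __name__ == "__main__":
    print("listening" if wait_for_unix_socket() else "timed out")
```

Once the socket answers, the test drives the target over that RPC socket, as seen in the scripts/rpc.py load_config call that follows.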
00:15:35.576 [2024-11-28 00:09:49.926925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83430 ] 00:15:35.576 [2024-11-28 00:09:50.072109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.576 [2024-11-28 00:09:50.102234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.576 [2024-11-28 00:09:50.102457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.144 00:09:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.144 00:09:50 -- common/autotest_common.sh@862 -- # return 0 00:15:36.144 00:09:50 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:36.403 [2024-11-28 00:09:50.930668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:36.403 [2024-11-28 00:09:50.930725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:36.664 [2024-11-28 00:09:51.091744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.091789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:36.664 [2024-11-28 00:09:51.091804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:36.664 [2024-11-28 00:09:51.091812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.093967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.094001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:36.664 [2024-11-28 00:09:51.094012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:15:36.664 [2024-11-28 00:09:51.094020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.094088] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:36.664 [2024-11-28 00:09:51.094306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:36.664 [2024-11-28 00:09:51.094321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.094329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:36.664 [2024-11-28 00:09:51.094340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:15:36.664 [2024-11-28 00:09:51.094347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.095404] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:36.664 [2024-11-28 00:09:51.097303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.097343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:36.664 [2024-11-28 00:09:51.097352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:15:36.664 [2024-11-28 00:09:51.097361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.097420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.097433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:15:36.664 [2024-11-28 00:09:51.097443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:36.664 [2024-11-28 00:09:51.097452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.101877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.101916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:36.664 [2024-11-28 00:09:51.101925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:15:36.664 [2024-11-28 00:09:51.101936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.102026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.102039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:36.664 [2024-11-28 00:09:51.102046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:36.664 [2024-11-28 00:09:51.102056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.102082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.102091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:36.664 [2024-11-28 00:09:51.102098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:36.664 [2024-11-28 00:09:51.102107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.102132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:36.664 [2024-11-28 00:09:51.103410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.103434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:36.664 [2024-11-28 00:09:51.103444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:15:36.664 [2024-11-28 00:09:51.103451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.103486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.664 [2024-11-28 00:09:51.103497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:36.664 [2024-11-28 00:09:51.103507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:36.664 [2024-11-28 00:09:51.103513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.664 [2024-11-28 00:09:51.103534] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:36.664 [2024-11-28 00:09:51.103551] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:36.664 [2024-11-28 00:09:51.103591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:36.664 [2024-11-28 00:09:51.103610] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:36.664 [2024-11-28 00:09:51.103684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:36.664 [2024-11-28 00:09:51.103694] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:15:36.664 [2024-11-28 00:09:51.103704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:36.665 [2024-11-28 00:09:51.103714] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:36.665 [2024-11-28 00:09:51.103726] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:36.665 [2024-11-28 00:09:51.103736] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:36.665 [2024-11-28 00:09:51.103747] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:36.665 [2024-11-28 00:09:51.103756] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:36.665 [2024-11-28 00:09:51.103765] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:36.665 [2024-11-28 00:09:51.103772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.665 [2024-11-28 00:09:51.103783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:36.665 [2024-11-28 00:09:51.103791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:15:36.665 [2024-11-28 00:09:51.103800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.665 [2024-11-28 00:09:51.103862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.665 [2024-11-28 00:09:51.103876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:36.665 [2024-11-28 00:09:51.103883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:36.665 [2024-11-28 00:09:51.103893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.665 [2024-11-28 00:09:51.103966] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:36.665 [2024-11-28 00:09:51.103976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:36.665 [2024-11-28 00:09:51.103984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:36.665 [2024-11-28 00:09:51.103994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:36.665 [2024-11-28 00:09:51.104009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:36.665 [2024-11-28 00:09:51.104032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:36.665 [2024-11-28 00:09:51.104046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:36.665 [2024-11-28 00:09:51.104054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:36.665 [2024-11-28 00:09:51.104062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:36.665 [2024-11-28 00:09:51.104078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:36.665 [2024-11-28 00:09:51.104086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:36.665 [2024-11-28 00:09:51.104095] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:36.665 [2024-11-28 00:09:51.104112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:36.665 [2024-11-28 00:09:51.104121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:36.665 [2024-11-28 00:09:51.104140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:36.665 [2024-11-28 00:09:51.104150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:36.665 [2024-11-28 00:09:51.104171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:36.665 [2024-11-28 00:09:51.104195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:36.665 [2024-11-28 00:09:51.104221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:36.665 [2024-11-28 00:09:51.104245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:36.665 [2024-11-28 00:09:51.104271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:36.665 [2024-11-28 00:09:51.104287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:36.665 [2024-11-28 00:09:51.104295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:36.665 [2024-11-28 00:09:51.104304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:36.665 [2024-11-28 00:09:51.104311] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:36.665 [2024-11-28 00:09:51.104320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:36.665 [2024-11-28 00:09:51.104329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:36.665 [2024-11-28 00:09:51.104349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:36.665 [2024-11-28 00:09:51.104376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:36.665 [2024-11-28 00:09:51.104384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
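The figures in this layout dump are internally consistent and can be re-derived with simple arithmetic; for example, the 90.00 MiB l2p region follows directly from the reported L2P entry count and 4-byte address size. A small sketch re-checking two of the numbers, using only values printed in this dump (block-size assumptions are not needed because the dump already reports MiB):

```python
MIB = 1024 * 1024

# Figures taken from the ftl_layout dump above.
l2p_entries   = 23_592_960        # "L2P entries: 23592960"
l2p_addr_size = 4                 # "L2P address size: 4" (bytes per entry)

l2p_bytes = l2p_entries * l2p_addr_size
print(l2p_bytes / MIB)            # 90.0 -> matches "Region l2p ... blocks: 90.00 MiB"

# Base-device accounting from the same dump: capacity minus the bottom data region
# leaves room for the superblock, valid map and free space.
base_capacity_mib = 103424.0      # "Base device capacity: 103424.00 MiB"
data_btm_mib      = 102400.0      # "Region data_btm ... blocks: 102400.00 MiB"
print(base_capacity_mib - data_btm_mib)   # 1024.0 MiB of non-data space on the base device
```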
00:15:36.665 [2024-11-28 00:09:51.104394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:36.665 [2024-11-28 00:09:51.104401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:36.665 [2024-11-28 00:09:51.104412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:36.665 [2024-11-28 00:09:51.104420] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:36.665 [2024-11-28 00:09:51.104435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:36.665 [2024-11-28 00:09:51.104443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:36.665 [2024-11-28 00:09:51.104453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:36.665 [2024-11-28 00:09:51.104460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:36.665 [2024-11-28 00:09:51.104468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:36.665 [2024-11-28 00:09:51.104475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:36.665 [2024-11-28 00:09:51.104483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:36.665 [2024-11-28 00:09:51.104491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:36.665 [2024-11-28 00:09:51.104499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:36.665 [2024-11-28 00:09:51.104506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:36.665 [2024-11-28 00:09:51.104515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:36.665 [2024-11-28 00:09:51.104522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:36.665 [2024-11-28 00:09:51.104530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:36.665 [2024-11-28 00:09:51.104538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:36.665 [2024-11-28 00:09:51.104546] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:36.665 [2024-11-28 00:09:51.104554] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:36.665 [2024-11-28 00:09:51.104564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:36.665 [2024-11-28 00:09:51.104571] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:36.665 [2024-11-28 00:09:51.104581] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:36.665 [2024-11-28 00:09:51.104588] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:36.665 [2024-11-28 00:09:51.104597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.665 [2024-11-28 00:09:51.104604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:36.665 [2024-11-28 00:09:51.104613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:15:36.665 [2024-11-28 00:09:51.104619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.665 [2024-11-28 00:09:51.110225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.665 [2024-11-28 00:09:51.110258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.665 [2024-11-28 00:09:51.110272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.559 ms 00:15:36.665 [2024-11-28 00:09:51.110283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.665 [2024-11-28 00:09:51.110412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.665 [2024-11-28 00:09:51.110423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:36.665 [2024-11-28 00:09:51.110433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:36.665 [2024-11-28 00:09:51.110440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.119096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.119128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.666 [2024-11-28 00:09:51.119139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.634 ms 00:15:36.666 [2024-11-28 00:09:51.119147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.119195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.119204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.666 [2024-11-28 00:09:51.119217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:36.666 [2024-11-28 00:09:51.119225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.119572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.119592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.666 [2024-11-28 00:09:51.119604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:15:36.666 [2024-11-28 00:09:51.119612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.119723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.119735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.666 [2024-11-28 00:09:51.119746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:36.666 [2024-11-28 00:09:51.119753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.124730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.124759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.666 [2024-11-28 00:09:51.124770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.951 ms 00:15:36.666 [2024-11-28 00:09:51.124779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.126929] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:36.666 [2024-11-28 00:09:51.126960] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:36.666 [2024-11-28 00:09:51.126971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.126983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:36.666 [2024-11-28 00:09:51.126993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:15:36.666 [2024-11-28 00:09:51.127000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.141255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.141289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:36.666 [2024-11-28 00:09:51.141301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.212 ms 00:15:36.666 [2024-11-28 00:09:51.141310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.142996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.143025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:36.666 [2024-11-28 00:09:51.143037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:15:36.666 [2024-11-28 00:09:51.143044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.144352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.144388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:36.666 [2024-11-28 00:09:51.144398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:15:36.666 [2024-11-28 00:09:51.144405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.144601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.144615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:36.666 [2024-11-28 00:09:51.144628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:15:36.666 [2024-11-28 00:09:51.144638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.161264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.161307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:36.666 [2024-11-28 00:09:51.161319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.603 ms 00:15:36.666 [2024-11-28 00:09:51.161328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.169098] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:36.666 [2024-11-28 00:09:51.182270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.182310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:36.666 [2024-11-28 00:09:51.182321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.852 ms 00:15:36.666 [2024-11-28 00:09:51.182333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.182412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.182425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:36.666 [2024-11-28 00:09:51.182433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:36.666 [2024-11-28 00:09:51.182442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.182491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.182501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:36.666 [2024-11-28 00:09:51.182514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:36.666 [2024-11-28 00:09:51.182523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.183654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.183791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:36.666 [2024-11-28 00:09:51.183807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:15:36.666 [2024-11-28 00:09:51.183815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.183853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.183865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:36.666 [2024-11-28 00:09:51.183872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:36.666 [2024-11-28 00:09:51.183881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.183911] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:36.666 [2024-11-28 00:09:51.183922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.183929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:36.666 [2024-11-28 00:09:51.183940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:36.666 [2024-11-28 00:09:51.183947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.187283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.187409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:36.666 [2024-11-28 00:09:51.187428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:15:36.666 [2024-11-28 00:09:51.187435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.187502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.666 [2024-11-28 00:09:51.187511] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:36.666 [2024-11-28 00:09:51.187522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:36.666 [2024-11-28 00:09:51.187529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.666 [2024-11-28 00:09:51.188278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:36.666 [2024-11-28 00:09:51.189245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.279 ms, result 0 00:15:36.666 [2024-11-28 00:09:51.189984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:36.666 Some configs were skipped because the RPC state that can call them passed over. 00:15:36.666 00:09:51 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:15:36.926 [2024-11-28 00:09:51.403035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.926 [2024-11-28 00:09:51.403184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:36.926 [2024-11-28 00:09:51.403244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:15:36.926 [2024-11-28 00:09:51.403271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.926 [2024-11-28 00:09:51.403385] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.384 ms, result 0 00:15:36.926 true 00:15:36.926 00:09:51 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:15:37.186 [2024-11-28 00:09:51.591122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.186 [2024-11-28 00:09:51.591262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:37.186 [2024-11-28 00:09:51.591321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.708 ms 00:15:37.186 [2024-11-28 00:09:51.591344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.186 [2024-11-28 00:09:51.591447] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.030 ms, result 0 00:15:37.186 true 00:15:37.186 00:09:51 -- ftl/trim.sh@81 -- # killprocess 83430 00:15:37.186 00:09:51 -- common/autotest_common.sh@936 -- # '[' -z 83430 ']' 00:15:37.186 00:09:51 -- common/autotest_common.sh@940 -- # kill -0 83430 00:15:37.186 00:09:51 -- common/autotest_common.sh@941 -- # uname 00:15:37.186 00:09:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.187 00:09:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83430 00:15:37.187 killing process with pid 83430 00:15:37.187 00:09:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.187 00:09:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.187 00:09:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83430' 00:15:37.187 00:09:51 -- common/autotest_common.sh@955 -- # kill 83430 00:15:37.187 00:09:51 -- common/autotest_common.sh@960 -- # wait 83430 00:15:37.187 [2024-11-28 00:09:51.725738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.725794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:37.187 
[2024-11-28 00:09:51.725806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:37.187 [2024-11-28 00:09:51.725817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.725839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:37.187 [2024-11-28 00:09:51.726258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.726279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:37.187 [2024-11-28 00:09:51.726289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:15:37.187 [2024-11-28 00:09:51.726297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.726604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.726619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:37.187 [2024-11-28 00:09:51.726630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:15:37.187 [2024-11-28 00:09:51.726640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.730859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.730889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:37.187 [2024-11-28 00:09:51.730900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:15:37.187 [2024-11-28 00:09:51.730907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.737959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.737992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:37.187 [2024-11-28 00:09:51.738004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.015 ms 00:15:37.187 [2024-11-28 00:09:51.738013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.739395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.739422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:37.187 [2024-11-28 00:09:51.739432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:15:37.187 [2024-11-28 00:09:51.739439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.742776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.742807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:37.187 [2024-11-28 00:09:51.742819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:15:37.187 [2024-11-28 00:09:51.742826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.742951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.742960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:37.187 [2024-11-28 00:09:51.742969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:37.187 [2024-11-28 00:09:51.742982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 
00:09:51.744700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.744827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:37.187 [2024-11-28 00:09:51.744848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:15:37.187 [2024-11-28 00:09:51.744855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.746151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.746177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:37.187 [2024-11-28 00:09:51.746188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:15:37.187 [2024-11-28 00:09:51.746194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.747092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.747122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:37.187 [2024-11-28 00:09:51.747132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:15:37.187 [2024-11-28 00:09:51.747139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.748409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.187 [2024-11-28 00:09:51.748434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:37.187 [2024-11-28 00:09:51.748444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:15:37.187 [2024-11-28 00:09:51.748451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.187 [2024-11-28 00:09:51.748484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:37.187 [2024-11-28 00:09:51.748498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748799] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:37.187 [2024-11-28 00:09:51.748853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.748996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 
00:09:51.749005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:15:37.188 [2024-11-28 00:09:51.749234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:37.188 [2024-11-28 00:09:51.749356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:37.188 [2024-11-28 00:09:51.749377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:37.188 [2024-11-28 00:09:51.749385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:37.188 [2024-11-28 00:09:51.749393] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:37.188 [2024-11-28 00:09:51.749400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:37.188 [2024-11-28 00:09:51.749409] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:37.188 [2024-11-28 00:09:51.749416] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:37.188 [2024-11-28 00:09:51.749425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:37.188 [2024-11-28 00:09:51.749434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:37.188 [2024-11-28 00:09:51.749442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:37.188 [2024-11-28 00:09:51.749448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:37.188 [2024-11-28 00:09:51.749456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.188 [2024-11-28 00:09:51.749463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:37.188 [2024-11-28 00:09:51.749475] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:15:37.188 [2024-11-28 00:09:51.749482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.750787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.188 [2024-11-28 00:09:51.750806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:37.188 [2024-11-28 00:09:51.750819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:15:37.188 [2024-11-28 00:09:51.750826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.750888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.188 [2024-11-28 00:09:51.750897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:37.188 [2024-11-28 00:09:51.750906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:37.188 [2024-11-28 00:09:51.750912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.755951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.188 [2024-11-28 00:09:51.756056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:37.188 [2024-11-28 00:09:51.756154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.188 [2024-11-28 00:09:51.756179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.756255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.188 [2024-11-28 00:09:51.756412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:37.188 [2024-11-28 00:09:51.756441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.188 [2024-11-28 00:09:51.756460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.756516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.188 [2024-11-28 00:09:51.756590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:37.188 [2024-11-28 00:09:51.756612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.188 [2024-11-28 00:09:51.756631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.756697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.188 [2024-11-28 00:09:51.756721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:37.188 [2024-11-28 00:09:51.756791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.188 [2024-11-28 00:09:51.756813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.188 [2024-11-28 00:09:51.765808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.765949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:37.189 [2024-11-28 00:09:51.766002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.766031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.769539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.769643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:15:37.189 [2024-11-28 00:09:51.769696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.769718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.769757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.769862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:37.189 [2024-11-28 00:09:51.769888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.769907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.769950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.770050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:37.189 [2024-11-28 00:09:51.770076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.770095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.770182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.770214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:37.189 [2024-11-28 00:09:51.770236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.770254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.770333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.770406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:37.189 [2024-11-28 00:09:51.770465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.770513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.770567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.770624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:37.189 [2024-11-28 00:09:51.770649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.770690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.770750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.189 [2024-11-28 00:09:51.770857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:37.189 [2024-11-28 00:09:51.770883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.189 [2024-11-28 00:09:51.770901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.189 [2024-11-28 00:09:51.771044] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.285 ms, result 0 00:15:37.448 00:09:51 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:37.448 00:09:51 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:37.448 [2024-11-28 00:09:51.999385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:15:37.448 [2024-11-28 00:09:51.999493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83468 ] 00:15:37.709 [2024-11-28 00:09:52.146430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.709 [2024-11-28 00:09:52.175583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.709 [2024-11-28 00:09:52.258554] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:37.709 [2024-11-28 00:09:52.258621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:37.969 [2024-11-28 00:09:52.403983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.404040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:37.969 [2024-11-28 00:09:52.404054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:37.969 [2024-11-28 00:09:52.404064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.406309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.406490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:37.969 [2024-11-28 00:09:52.406508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:15:37.969 [2024-11-28 00:09:52.406516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.406583] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:37.969 [2024-11-28 00:09:52.406817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:37.969 [2024-11-28 00:09:52.406840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.406849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:37.969 [2024-11-28 00:09:52.406857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:15:37.969 [2024-11-28 00:09:52.406864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.407953] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:37.969 [2024-11-28 00:09:52.410094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.410211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:37.969 [2024-11-28 00:09:52.410226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:15:37.969 [2024-11-28 00:09:52.410234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.410288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.410303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:37.969 [2024-11-28 00:09:52.410311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:37.969 [2024-11-28 00:09:52.410320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.414929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:37.969 [2024-11-28 00:09:52.415018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:37.969 [2024-11-28 00:09:52.415071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:15:37.969 [2024-11-28 00:09:52.415094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.415226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.415289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:37.969 [2024-11-28 00:09:52.415344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:37.969 [2024-11-28 00:09:52.415377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.415460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.415488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:37.969 [2024-11-28 00:09:52.415546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:37.969 [2024-11-28 00:09:52.415572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.415616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:37.969 [2024-11-28 00:09:52.416939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.417028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:37.969 [2024-11-28 00:09:52.417107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:15:37.969 [2024-11-28 00:09:52.417207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.969 [2024-11-28 00:09:52.417263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.969 [2024-11-28 00:09:52.417294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:37.969 [2024-11-28 00:09:52.417342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:37.969 [2024-11-28 00:09:52.417374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.970 [2024-11-28 00:09:52.417409] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:37.970 [2024-11-28 00:09:52.417512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:37.970 [2024-11-28 00:09:52.417575] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:37.970 [2024-11-28 00:09:52.417647] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:37.970 [2024-11-28 00:09:52.417773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:37.970 [2024-11-28 00:09:52.417842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:37.970 [2024-11-28 00:09:52.417897] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:37.970 [2024-11-28 00:09:52.417929] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:37.970 [2024-11-28 00:09:52.418037] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:37.970 [2024-11-28 00:09:52.418072] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:37.970 [2024-11-28 00:09:52.418090] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:37.970 [2024-11-28 00:09:52.418161] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:37.970 [2024-11-28 00:09:52.418182] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:37.970 [2024-11-28 00:09:52.418206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.970 [2024-11-28 00:09:52.418229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:37.970 [2024-11-28 00:09:52.418305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:15:37.970 [2024-11-28 00:09:52.418326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.970 [2024-11-28 00:09:52.418431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.970 [2024-11-28 00:09:52.418456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:37.970 [2024-11-28 00:09:52.418480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:37.970 [2024-11-28 00:09:52.418528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.970 [2024-11-28 00:09:52.418655] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:37.970 [2024-11-28 00:09:52.418713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:37.970 [2024-11-28 00:09:52.418736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:37.970 [2024-11-28 00:09:52.418776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:37.970 [2024-11-28 00:09:52.418798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:37.970 [2024-11-28 00:09:52.418816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:37.970 [2024-11-28 00:09:52.418909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:37.970 [2024-11-28 00:09:52.418930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:37.970 [2024-11-28 00:09:52.418948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:37.970 [2024-11-28 00:09:52.418966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:37.970 [2024-11-28 00:09:52.419021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:37.970 [2024-11-28 00:09:52.419049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:37.970 [2024-11-28 00:09:52.419067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:37.970 [2024-11-28 00:09:52.419085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:37.970 [2024-11-28 00:09:52.419130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:37.970 [2024-11-28 00:09:52.419154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:37.970 [2024-11-28 00:09:52.419190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:37.970 [2024-11-28 00:09:52.419229] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:37.970 [2024-11-28 00:09:52.419290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:37.970 [2024-11-28 00:09:52.419336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:37.970 [2024-11-28 00:09:52.419358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:37.970 [2024-11-28 00:09:52.419393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:37.970 [2024-11-28 00:09:52.419503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:37.970 [2024-11-28 00:09:52.419521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:37.970 [2024-11-28 00:09:52.419606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:37.970 [2024-11-28 00:09:52.419626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:37.970 [2024-11-28 00:09:52.419666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:37.970 [2024-11-28 00:09:52.419748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:37.970 [2024-11-28 00:09:52.419788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:37.970 [2024-11-28 00:09:52.419836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:37.970 [2024-11-28 00:09:52.419858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:37.970 [2024-11-28 00:09:52.419875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:37.970 [2024-11-28 00:09:52.419914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:37.970 [2024-11-28 00:09:52.419954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:37.970 [2024-11-28 00:09:52.419973] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:37.970 [2024-11-28 00:09:52.420026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:37.970 [2024-11-28 00:09:52.420048] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:37.970 [2024-11-28 00:09:52.420066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:37.970 [2024-11-28 00:09:52.420155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:37.970 [2024-11-28 00:09:52.420177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:37.970 [2024-11-28 00:09:52.420195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:37.970 [2024-11-28 00:09:52.420245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:37.970 [2024-11-28 00:09:52.420266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:37.970 [2024-11-28 00:09:52.420285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:37.970 
[2024-11-28 00:09:52.420325] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:37.970 [2024-11-28 00:09:52.420358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:37.970 [2024-11-28 00:09:52.420407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:37.970 [2024-11-28 00:09:52.420459] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:37.970 [2024-11-28 00:09:52.420515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:37.970 [2024-11-28 00:09:52.420548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:37.970 [2024-11-28 00:09:52.420575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:37.970 [2024-11-28 00:09:52.420603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:37.970 [2024-11-28 00:09:52.420630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:37.970 [2024-11-28 00:09:52.420688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:37.970 [2024-11-28 00:09:52.420719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:37.970 [2024-11-28 00:09:52.420822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:37.970 [2024-11-28 00:09:52.420851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:37.970 [2024-11-28 00:09:52.420881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:37.970 [2024-11-28 00:09:52.420911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:37.970 [2024-11-28 00:09:52.420939] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:37.970 [2024-11-28 00:09:52.420972] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:37.970 [2024-11-28 00:09:52.421034] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:37.970 [2024-11-28 00:09:52.421064] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:37.970 [2024-11-28 00:09:52.421091] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:37.970 [2024-11-28 00:09:52.421128] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:37.970 [2024-11-28 00:09:52.421157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.970 [2024-11-28 00:09:52.421176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:37.970 [2024-11-28 00:09:52.421200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.545 ms 00:15:37.970 [2024-11-28 00:09:52.421217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.970 [2024-11-28 00:09:52.427072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.970 [2024-11-28 00:09:52.427197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:37.971 [2024-11-28 00:09:52.427220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:15:37.971 [2024-11-28 00:09:52.427234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.427352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.427381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:37.971 [2024-11-28 00:09:52.427392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:15:37.971 [2024-11-28 00:09:52.427401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.458852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.458898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:37.971 [2024-11-28 00:09:52.458914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.422 ms 00:15:37.971 [2024-11-28 00:09:52.458924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.459011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.459022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:37.971 [2024-11-28 00:09:52.459035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:37.971 [2024-11-28 00:09:52.459043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.459355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.459387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:37.971 [2024-11-28 00:09:52.459396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:15:37.971 [2024-11-28 00:09:52.459407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.459527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.459587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:37.971 [2024-11-28 00:09:52.459599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:15:37.971 [2024-11-28 00:09:52.459608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.464696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.464730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:37.971 [2024-11-28 00:09:52.464739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.065 ms 00:15:37.971 [2024-11-28 00:09:52.464747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.466893] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:37.971 [2024-11-28 00:09:52.467011] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:37.971 [2024-11-28 00:09:52.467023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.467031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:37.971 [2024-11-28 00:09:52.467046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:15:37.971 [2024-11-28 00:09:52.467053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.481455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.481552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:37.971 [2024-11-28 00:09:52.481605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.356 ms 00:15:37.971 [2024-11-28 00:09:52.481627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.483398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.483493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:37.971 [2024-11-28 00:09:52.483546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:15:37.971 [2024-11-28 00:09:52.483567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.484864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.484955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:37.971 [2024-11-28 00:09:52.485008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:15:37.971 [2024-11-28 00:09:52.485030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.485248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.485378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:37.971 [2024-11-28 00:09:52.485439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:37.971 [2024-11-28 00:09:52.485496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.502541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.502685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:37.971 [2024-11-28 00:09:52.502741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.002 ms 00:15:37.971 [2024-11-28 00:09:52.502765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.510599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:37.971 [2024-11-28 00:09:52.524558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.524704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:37.971 [2024-11-28 
00:09:52.524761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.424 ms 00:15:37.971 [2024-11-28 00:09:52.524805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.524904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.524958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:37.971 [2024-11-28 00:09:52.525013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:37.971 [2024-11-28 00:09:52.525034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.525104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.525128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:37.971 [2024-11-28 00:09:52.525147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:37.971 [2024-11-28 00:09:52.525171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.526315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.526423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:37.971 [2024-11-28 00:09:52.526484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:15:37.971 [2024-11-28 00:09:52.526505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.526548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.526650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:37.971 [2024-11-28 00:09:52.526679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:37.971 [2024-11-28 00:09:52.526697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.526745] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:37.971 [2024-11-28 00:09:52.526867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.526895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:37.971 [2024-11-28 00:09:52.526915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:37.971 [2024-11-28 00:09:52.526971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.530285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.530406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:37.971 [2024-11-28 00:09:52.530460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:15:37.971 [2024-11-28 00:09:52.530482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.530585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.971 [2024-11-28 00:09:52.530612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:37.971 [2024-11-28 00:09:52.530656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:37.971 [2024-11-28 00:09:52.530677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.971 [2024-11-28 00:09:52.531451] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:37.971 [2024-11-28 00:09:52.532432] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.196 ms, result 0 00:15:37.971 [2024-11-28 00:09:52.532935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:37.971 [2024-11-28 00:09:52.542293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:39.348  [2024-11-28T00:09:54.920Z] Copying: 29/256 [MB] (29 MBps) [2024-11-28T00:09:55.870Z] Copying: 48/256 [MB] (18 MBps) [2024-11-28T00:09:56.804Z] Copying: 62/256 [MB] (14 MBps) [2024-11-28T00:09:57.738Z] Copying: 89/256 [MB] (26 MBps) [2024-11-28T00:09:58.670Z] Copying: 133/256 [MB] (43 MBps) [2024-11-28T00:09:59.606Z] Copying: 176/256 [MB] (43 MBps) [2024-11-28T00:10:00.541Z] Copying: 222/256 [MB] (45 MBps) [2024-11-28T00:10:00.541Z] Copying: 256/256 [MB] (average 33 MBps)[2024-11-28 00:10:00.297290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:45.939 [2024-11-28 00:10:00.298348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.298383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:45.939 [2024-11-28 00:10:00.298400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:45.939 [2024-11-28 00:10:00.298407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.298430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:45.939 [2024-11-28 00:10:00.298805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.298819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:45.939 [2024-11-28 00:10:00.298828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:15:45.939 [2024-11-28 00:10:00.298835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.299091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.299101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:45.939 [2024-11-28 00:10:00.299108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:15:45.939 [2024-11-28 00:10:00.299115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.302824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.302847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:45.939 [2024-11-28 00:10:00.302856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:15:45.939 [2024-11-28 00:10:00.302864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.309718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.309743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:45.939 [2024-11-28 00:10:00.309754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.828 ms 00:15:45.939 [2024-11-28 00:10:00.309761] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.311167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.311209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:45.939 [2024-11-28 00:10:00.311218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:15:45.939 [2024-11-28 00:10:00.311225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.314656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.314779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:45.939 [2024-11-28 00:10:00.314794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.412 ms 00:15:45.939 [2024-11-28 00:10:00.314809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.314925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.314934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:45.939 [2024-11-28 00:10:00.314946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:15:45.939 [2024-11-28 00:10:00.314953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.316686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.316713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:45.939 [2024-11-28 00:10:00.316722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:15:45.939 [2024-11-28 00:10:00.316728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.318161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.318268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:45.939 [2024-11-28 00:10:00.318281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:15:45.939 [2024-11-28 00:10:00.318288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.319321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.319344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:45.939 [2024-11-28 00:10:00.319352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:15:45.939 [2024-11-28 00:10:00.319359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.320302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.939 [2024-11-28 00:10:00.320331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:45.939 [2024-11-28 00:10:00.320339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:15:45.939 [2024-11-28 00:10:00.320346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.939 [2024-11-28 00:10:00.320384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:45.939 [2024-11-28 00:10:00.320397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:45.939 [2024-11-28 00:10:00.320406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:45.939 [2024-11-28 00:10:00.320413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:45.939 [2024-11-28 00:10:00.320421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320584] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 
00:10:00.320763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:45.940 [2024-11-28 00:10:00.320906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:15:45.941 [2024-11-28 00:10:00.320941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.320998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:45.941 [2024-11-28 00:10:00.321131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:45.941 
[2024-11-28 00:10:00.321139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:45.941 [2024-11-28 00:10:00.321147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:45.941 [2024-11-28 00:10:00.321154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:45.941 [2024-11-28 00:10:00.321161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:45.941 [2024-11-28 00:10:00.321177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:45.941 [2024-11-28 00:10:00.321187] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:45.941 [2024-11-28 00:10:00.321195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:45.941 [2024-11-28 00:10:00.321202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:45.941 [2024-11-28 00:10:00.321208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:45.941 [2024-11-28 00:10:00.321214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:45.941 [2024-11-28 00:10:00.321221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.941 [2024-11-28 00:10:00.321228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:45.941 [2024-11-28 00:10:00.321239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:15:45.941 [2024-11-28 00:10:00.321246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.322498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.941 [2024-11-28 00:10:00.322517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:45.941 [2024-11-28 00:10:00.322526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:15:45.941 [2024-11-28 00:10:00.322532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.322592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.941 [2024-11-28 00:10:00.322601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:45.941 [2024-11-28 00:10:00.322613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:45.941 [2024-11-28 00:10:00.322619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.327230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.327262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:45.941 [2024-11-28 00:10:00.327271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.327278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.327340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.327348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:45.941 [2024-11-28 00:10:00.327355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.327373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.327414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 
00:10:00.327426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:45.941 [2024-11-28 00:10:00.327434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.327441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.327457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.327466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:45.941 [2024-11-28 00:10:00.327473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.327480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.335407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.335438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:45.941 [2024-11-28 00:10:00.335447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.335455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.338916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.338951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:45.941 [2024-11-28 00:10:00.338960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.338967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.941 [2024-11-28 00:10:00.338989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.941 [2024-11-28 00:10:00.338997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:45.941 [2024-11-28 00:10:00.339005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.941 [2024-11-28 00:10:00.339012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.942 [2024-11-28 00:10:00.339046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:45.942 [2024-11-28 00:10:00.339054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.942 [2024-11-28 00:10:00.339064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.942 [2024-11-28 00:10:00.339134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:45.942 [2024-11-28 00:10:00.339142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.942 [2024-11-28 00:10:00.339149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.942 [2024-11-28 00:10:00.339185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:45.942 [2024-11-28 00:10:00.339193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.942 [2024-11-28 00:10:00.339202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339240] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.942 [2024-11-28 00:10:00.339248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:45.942 [2024-11-28 00:10:00.339259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.942 [2024-11-28 00:10:00.339266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:45.942 [2024-11-28 00:10:00.339315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:45.942 [2024-11-28 00:10:00.339323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:45.942 [2024-11-28 00:10:00.339332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.942 [2024-11-28 00:10:00.339478] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 41.115 ms, result 0 00:15:45.942 00:15:45.942 00:15:45.942 00:10:00 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:15:45.942 00:10:00 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:46.507 00:10:01 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:46.765 [2024-11-28 00:10:01.120656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:46.765 [2024-11-28 00:10:01.120763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83565 ] 00:15:46.765 [2024-11-28 00:10:01.267487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.765 [2024-11-28 00:10:01.298907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.025 [2024-11-28 00:10:01.383637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:47.025 [2024-11-28 00:10:01.383705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:47.025 [2024-11-28 00:10:01.532700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.025 [2024-11-28 00:10:01.532888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:47.025 [2024-11-28 00:10:01.532912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.026 [2024-11-28 00:10:01.532920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.535207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.535242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.026 [2024-11-28 00:10:01.535256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:15:47.026 [2024-11-28 00:10:01.535266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.535335] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:47.026 [2024-11-28 00:10:01.535580] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:15:47.026 [2024-11-28 00:10:01.535593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.535601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.026 [2024-11-28 00:10:01.535609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:15:47.026 [2024-11-28 00:10:01.535616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.536700] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:47.026 [2024-11-28 00:10:01.539258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.539395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:47.026 [2024-11-28 00:10:01.539411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:15:47.026 [2024-11-28 00:10:01.539425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.539716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.539749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:47.026 [2024-11-28 00:10:01.539761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:47.026 [2024-11-28 00:10:01.539775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.544620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.544657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.026 [2024-11-28 00:10:01.544667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.802 ms 00:15:47.026 [2024-11-28 00:10:01.544674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.544771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.544781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.026 [2024-11-28 00:10:01.544795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:47.026 [2024-11-28 00:10:01.544801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.544826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.544838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:47.026 [2024-11-28 00:10:01.544845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:47.026 [2024-11-28 00:10:01.544854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.544877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:47.026 [2024-11-28 00:10:01.546219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.546248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.026 [2024-11-28 00:10:01.546257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:15:47.026 [2024-11-28 00:10:01.546264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.546300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 
[2024-11-28 00:10:01.546310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:47.026 [2024-11-28 00:10:01.546321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:47.026 [2024-11-28 00:10:01.546327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.546344] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:47.026 [2024-11-28 00:10:01.546377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:47.026 [2024-11-28 00:10:01.546411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:47.026 [2024-11-28 00:10:01.546443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:47.026 [2024-11-28 00:10:01.546517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:47.026 [2024-11-28 00:10:01.546528] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:47.026 [2024-11-28 00:10:01.546541] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:47.026 [2024-11-28 00:10:01.546551] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546562] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546573] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:47.026 [2024-11-28 00:10:01.546579] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:47.026 [2024-11-28 00:10:01.546590] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:47.026 [2024-11-28 00:10:01.546597] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:47.026 [2024-11-28 00:10:01.546604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.546613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:47.026 [2024-11-28 00:10:01.546620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:15:47.026 [2024-11-28 00:10:01.546628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.546691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.026 [2024-11-28 00:10:01.546699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:47.026 [2024-11-28 00:10:01.546706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:47.026 [2024-11-28 00:10:01.546714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.026 [2024-11-28 00:10:01.546787] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:47.026 [2024-11-28 00:10:01.546795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:47.026 [2024-11-28 00:10:01.546802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546817] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:47.026 [2024-11-28 00:10:01.546823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:47.026 [2024-11-28 00:10:01.546842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.026 [2024-11-28 00:10:01.546855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:47.026 [2024-11-28 00:10:01.546869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:47.026 [2024-11-28 00:10:01.546877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.026 [2024-11-28 00:10:01.546885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:47.026 [2024-11-28 00:10:01.546893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:47.026 [2024-11-28 00:10:01.546901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:47.026 [2024-11-28 00:10:01.546918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:47.026 [2024-11-28 00:10:01.546925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:47.026 [2024-11-28 00:10:01.546940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:47.026 [2024-11-28 00:10:01.546947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:47.026 [2024-11-28 00:10:01.546963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:47.026 [2024-11-28 00:10:01.546985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:47.026 [2024-11-28 00:10:01.546992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.026 [2024-11-28 00:10:01.546999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:47.026 [2024-11-28 00:10:01.547006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:47.026 [2024-11-28 00:10:01.547013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.026 [2024-11-28 00:10:01.547024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:47.026 [2024-11-28 00:10:01.547032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:47.026 [2024-11-28 00:10:01.547040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.026 [2024-11-28 00:10:01.547047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:47.026 [2024-11-28 00:10:01.547054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:47.026 [2024-11-28 
00:10:01.547061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.026 [2024-11-28 00:10:01.547068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:47.026 [2024-11-28 00:10:01.547076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:47.026 [2024-11-28 00:10:01.547083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.026 [2024-11-28 00:10:01.547089] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:47.027 [2024-11-28 00:10:01.547097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:47.027 [2024-11-28 00:10:01.547105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.027 [2024-11-28 00:10:01.547113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.027 [2024-11-28 00:10:01.547121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:47.027 [2024-11-28 00:10:01.547128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:47.027 [2024-11-28 00:10:01.547135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:47.027 [2024-11-28 00:10:01.547145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:47.027 [2024-11-28 00:10:01.547152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:47.027 [2024-11-28 00:10:01.547160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:47.027 [2024-11-28 00:10:01.547168] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:47.027 [2024-11-28 00:10:01.547178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.027 [2024-11-28 00:10:01.547189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:47.027 [2024-11-28 00:10:01.547197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:47.027 [2024-11-28 00:10:01.547206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:47.027 [2024-11-28 00:10:01.547214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:47.027 [2024-11-28 00:10:01.547222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:47.027 [2024-11-28 00:10:01.547230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:47.027 [2024-11-28 00:10:01.547238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:47.027 [2024-11-28 00:10:01.547246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:47.027 [2024-11-28 00:10:01.547254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:47.027 [2024-11-28 00:10:01.547261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:47.027 [2024-11-28 00:10:01.547269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:47.027 [2024-11-28 00:10:01.547279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:47.027 [2024-11-28 00:10:01.547288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:47.027 [2024-11-28 00:10:01.547295] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:47.027 [2024-11-28 00:10:01.547304] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.027 [2024-11-28 00:10:01.547313] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:47.027 [2024-11-28 00:10:01.547321] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:47.027 [2024-11-28 00:10:01.547329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:47.027 [2024-11-28 00:10:01.547336] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:47.027 [2024-11-28 00:10:01.547343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.547350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:47.027 [2024-11-28 00:10:01.547357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:15:47.027 [2024-11-28 00:10:01.547394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.553323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.553354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.027 [2024-11-28 00:10:01.553385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.887 ms 00:15:47.027 [2024-11-28 00:10:01.553396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.553503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.553538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:47.027 [2024-11-28 00:10:01.553547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:47.027 [2024-11-28 00:10:01.553553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.571016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.571056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.027 [2024-11-28 00:10:01.571070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:15:47.027 [2024-11-28 00:10:01.571080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.571151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:47.027 [2024-11-28 00:10:01.571166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.027 [2024-11-28 00:10:01.571175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:47.027 [2024-11-28 00:10:01.571182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.571522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.571543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.027 [2024-11-28 00:10:01.571552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:15:47.027 [2024-11-28 00:10:01.571563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.571682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.571696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.027 [2024-11-28 00:10:01.571705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:15:47.027 [2024-11-28 00:10:01.571717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.577165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.577317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.027 [2024-11-28 00:10:01.577335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:15:47.027 [2024-11-28 00:10:01.577343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.580039] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:47.027 [2024-11-28 00:10:01.580074] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:47.027 [2024-11-28 00:10:01.580086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.580094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:47.027 [2024-11-28 00:10:01.580109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:15:47.027 [2024-11-28 00:10:01.580117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.595222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.595335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:47.027 [2024-11-28 00:10:01.595403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.059 ms 00:15:47.027 [2024-11-28 00:10:01.595427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.599908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.600234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:47.027 [2024-11-28 00:10:01.600282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.987 ms 00:15:47.027 [2024-11-28 00:10:01.600307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.603923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.604002] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:47.027 [2024-11-28 00:10:01.604028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:15:47.027 [2024-11-28 00:10:01.604048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.027 [2024-11-28 00:10:01.604652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.027 [2024-11-28 00:10:01.604716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:47.027 [2024-11-28 00:10:01.604754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:15:47.027 [2024-11-28 00:10:01.604778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.626243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.626285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:47.287 [2024-11-28 00:10:01.626296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.369 ms 00:15:47.287 [2024-11-28 00:10:01.626303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.633650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:47.287 [2024-11-28 00:10:01.647499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.647533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:47.287 [2024-11-28 00:10:01.647544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.098 ms 00:15:47.287 [2024-11-28 00:10:01.647551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.647624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.647636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:47.287 [2024-11-28 00:10:01.647644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:47.287 [2024-11-28 00:10:01.647651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.647695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.647703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:47.287 [2024-11-28 00:10:01.647711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:47.287 [2024-11-28 00:10:01.647718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.648933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.648960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:47.287 [2024-11-28 00:10:01.648971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:15:47.287 [2024-11-28 00:10:01.648978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.649007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.649015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.287 [2024-11-28 00:10:01.649026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.287 [2024-11-28 00:10:01.649036] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.649067] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:47.287 [2024-11-28 00:10:01.649081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.649088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:47.287 [2024-11-28 00:10:01.649116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:47.287 [2024-11-28 00:10:01.649125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.653190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.653223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.287 [2024-11-28 00:10:01.653232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:15:47.287 [2024-11-28 00:10:01.653240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.653314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.287 [2024-11-28 00:10:01.653323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.287 [2024-11-28 00:10:01.653331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:47.287 [2024-11-28 00:10:01.653338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.287 [2024-11-28 00:10:01.654444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:47.287 [2024-11-28 00:10:01.655439] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.472 ms, result 0 00:15:47.287 [2024-11-28 00:10:01.656615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:47.287 [2024-11-28 00:10:01.664610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:47.548  [2024-11-28T00:10:02.150Z] Copying: 4096/4096 [kB] (average 14 MBps)[2024-11-28 00:10:01.936842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:47.548 [2024-11-28 00:10:01.937817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.937858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:47.548 [2024-11-28 00:10:01.937872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:47.548 [2024-11-28 00:10:01.937880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.937912] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:47.548 [2024-11-28 00:10:01.938320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.938333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:47.548 [2024-11-28 00:10:01.938342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:15:47.548 [2024-11-28 00:10:01.938349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.940061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 
00:10:01.940092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:47.548 [2024-11-28 00:10:01.940101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:15:47.548 [2024-11-28 00:10:01.940108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.944305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.944330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:47.548 [2024-11-28 00:10:01.944339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.177 ms 00:15:47.548 [2024-11-28 00:10:01.944346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.951203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.951231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:47.548 [2024-11-28 00:10:01.951240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:15:47.548 [2024-11-28 00:10:01.951248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.953223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.953252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:47.548 [2024-11-28 00:10:01.953261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:15:47.548 [2024-11-28 00:10:01.953268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.956639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.956769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:47.548 [2024-11-28 00:10:01.956783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:15:47.548 [2024-11-28 00:10:01.956799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.956962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.956972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:47.548 [2024-11-28 00:10:01.956980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:15:47.548 [2024-11-28 00:10:01.956987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.959561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.959590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:47.548 [2024-11-28 00:10:01.959599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:15:47.548 [2024-11-28 00:10:01.959606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.961932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.961964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:47.548 [2024-11-28 00:10:01.961972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:15:47.548 [2024-11-28 00:10:01.961978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.963626] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.963654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:47.548 [2024-11-28 00:10:01.963662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:15:47.548 [2024-11-28 00:10:01.963669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.965255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.548 [2024-11-28 00:10:01.965285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:47.548 [2024-11-28 00:10:01.965293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:15:47.548 [2024-11-28 00:10:01.965300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.548 [2024-11-28 00:10:01.965328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:47.548 [2024-11-28 00:10:01.965341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 
00:10:01.965491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:47.548 [2024-11-28 00:10:01.965599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:15:47.549 [2024-11-28 00:10:01.965671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.965998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:47.549 [2024-11-28 00:10:01.966086] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:47.549 [2024-11-28 00:10:01.966094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:47.549 [2024-11-28 00:10:01.966101] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:47.549 [2024-11-28 00:10:01.966108] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:47.549 [2024-11-28 00:10:01.966115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:47.549 [2024-11-28 00:10:01.966123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:47.549 [2024-11-28 00:10:01.966136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:47.549 [2024-11-28 00:10:01.966143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:47.549 [2024-11-28 00:10:01.966154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:47.549 [2024-11-28 00:10:01.966160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:47.549 [2024-11-28 00:10:01.966166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:47.549 [2024-11-28 00:10:01.966172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.549 [2024-11-28 00:10:01.966181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:47.549 [2024-11-28 00:10:01.966189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:15:47.549 [2024-11-28 00:10:01.966195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.549 [2024-11-28 00:10:01.967544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.549 [2024-11-28 00:10:01.967569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:47.549 [2024-11-28 00:10:01.967577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:15:47.549 [2024-11-28 00:10:01.967584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.549 [2024-11-28 00:10:01.967645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.549 [2024-11-28 00:10:01.967653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:47.549 [2024-11-28 00:10:01.967661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 
00:15:47.549 [2024-11-28 00:10:01.967668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.549 [2024-11-28 00:10:01.972592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.549 [2024-11-28 00:10:01.972622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.549 [2024-11-28 00:10:01.972631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.549 [2024-11-28 00:10:01.972638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.549 [2024-11-28 00:10:01.972717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.972726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.550 [2024-11-28 00:10:01.972734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.972740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.972779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.972787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.550 [2024-11-28 00:10:01.972797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.972805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.972820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.972830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.550 [2024-11-28 00:10:01.972837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.972844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.981092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.981134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.550 [2024-11-28 00:10:01.981144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.981151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.984705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.984739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.550 [2024-11-28 00:10:01.984748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.984755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.984793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.984801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.550 [2024-11-28 00:10:01.984809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.984816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.984843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.984851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.550 [2024-11-28 00:10:01.984861] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.984868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.984928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.984942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.550 [2024-11-28 00:10:01.984950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.984957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.984985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.984993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:47.550 [2024-11-28 00:10:01.985000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.985010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.985045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.985054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.550 [2024-11-28 00:10:01.985064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.985074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.985136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:47.550 [2024-11-28 00:10:01.985146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.550 [2024-11-28 00:10:01.985157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:47.550 [2024-11-28 00:10:01.985164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.550 [2024-11-28 00:10:01.985295] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.465 ms, result 0 00:15:47.809 00:15:47.809 00:15:47.809 00:10:02 -- ftl/trim.sh@93 -- # svcpid=83585 00:15:47.809 00:10:02 -- ftl/trim.sh@94 -- # waitforlisten 83585 00:15:47.809 00:10:02 -- common/autotest_common.sh@829 -- # '[' -z 83585 ']' 00:15:47.809 00:10:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.809 00:10:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.809 00:10:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.809 00:10:02 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:47.809 00:10:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.809 00:10:02 -- common/autotest_common.sh@10 -- # set +x 00:15:47.809 [2024-11-28 00:10:02.249717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:47.809 [2024-11-28 00:10:02.249981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83585 ] 00:15:47.809 [2024-11-28 00:10:02.396156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.067 [2024-11-28 00:10:02.427166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.067 [2024-11-28 00:10:02.427529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.634 00:10:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.634 00:10:03 -- common/autotest_common.sh@862 -- # return 0 00:15:48.634 00:10:03 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:48.895 [2024-11-28 00:10:03.244647] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:48.895 [2024-11-28 00:10:03.244704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:48.895 [2024-11-28 00:10:03.406087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.406255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:48.895 [2024-11-28 00:10:03.406278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:48.895 [2024-11-28 00:10:03.406287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.408453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.408481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.895 [2024-11-28 00:10:03.408492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:15:48.895 [2024-11-28 00:10:03.408500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.408649] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:48.895 [2024-11-28 00:10:03.408870] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:48.895 [2024-11-28 00:10:03.408885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.408892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.895 [2024-11-28 00:10:03.408902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:15:48.895 [2024-11-28 00:10:03.408912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.410086] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:48.895 [2024-11-28 00:10:03.412297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.412425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:48.895 [2024-11-28 00:10:03.412491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:15:48.895 [2024-11-28 00:10:03.412517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.412608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.412644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:15:48.895 [2024-11-28 00:10:03.412751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:48.895 [2024-11-28 00:10:03.412779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.417261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.417377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.895 [2024-11-28 00:10:03.417434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.418 ms 00:15:48.895 [2024-11-28 00:10:03.417460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.417640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.417719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.895 [2024-11-28 00:10:03.417769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:15:48.895 [2024-11-28 00:10:03.417822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.417867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.417919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:48.895 [2024-11-28 00:10:03.417976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:48.895 [2024-11-28 00:10:03.418001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.418065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:48.895 [2024-11-28 00:10:03.419320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.419341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.895 [2024-11-28 00:10:03.419352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:15:48.895 [2024-11-28 00:10:03.419359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.419406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.419413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:48.895 [2024-11-28 00:10:03.419423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:48.895 [2024-11-28 00:10:03.419433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.419459] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:48.895 [2024-11-28 00:10:03.419475] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:48.895 [2024-11-28 00:10:03.419514] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:48.895 [2024-11-28 00:10:03.419531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:48.895 [2024-11-28 00:10:03.419604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:48.895 [2024-11-28 00:10:03.419621] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:15:48.895 [2024-11-28 00:10:03.419635] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:48.895 [2024-11-28 00:10:03.419645] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:48.895 [2024-11-28 00:10:03.419656] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:48.895 [2024-11-28 00:10:03.419664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:48.895 [2024-11-28 00:10:03.419674] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:48.895 [2024-11-28 00:10:03.419681] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:48.895 [2024-11-28 00:10:03.419689] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:48.895 [2024-11-28 00:10:03.419696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.419705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:48.895 [2024-11-28 00:10:03.419712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:15:48.895 [2024-11-28 00:10:03.419721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.419784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.895 [2024-11-28 00:10:03.419793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:48.895 [2024-11-28 00:10:03.419801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:48.895 [2024-11-28 00:10:03.419813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.895 [2024-11-28 00:10:03.419887] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:48.895 [2024-11-28 00:10:03.419897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:48.895 [2024-11-28 00:10:03.419904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.895 [2024-11-28 00:10:03.419919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.895 [2024-11-28 00:10:03.419930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:48.895 [2024-11-28 00:10:03.419938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:48.895 [2024-11-28 00:10:03.419944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:48.895 [2024-11-28 00:10:03.419953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:48.895 [2024-11-28 00:10:03.419959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:48.895 [2024-11-28 00:10:03.419968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.895 [2024-11-28 00:10:03.419976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:48.896 [2024-11-28 00:10:03.419985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:48.896 [2024-11-28 00:10:03.419992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.896 [2024-11-28 00:10:03.420001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:48.896 [2024-11-28 00:10:03.420008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:48.896 [2024-11-28 00:10:03.420017] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:48.896 [2024-11-28 00:10:03.420033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:48.896 [2024-11-28 00:10:03.420040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:48.896 [2024-11-28 00:10:03.420058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:48.896 [2024-11-28 00:10:03.420068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:48.896 [2024-11-28 00:10:03.420090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:48.896 [2024-11-28 00:10:03.420113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:48.896 [2024-11-28 00:10:03.420138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:48.896 [2024-11-28 00:10:03.420161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420177] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:48.896 [2024-11-28 00:10:03.420187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.896 [2024-11-28 00:10:03.420204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:48.896 [2024-11-28 00:10:03.420211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:48.896 [2024-11-28 00:10:03.420220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.896 [2024-11-28 00:10:03.420226] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:48.896 [2024-11-28 00:10:03.420236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:48.896 [2024-11-28 00:10:03.420244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.896 [2024-11-28 00:10:03.420264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:48.896 [2024-11-28 00:10:03.420275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:48.896 [2024-11-28 00:10:03.420284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:15:48.896 [2024-11-28 00:10:03.420293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:48.896 [2024-11-28 00:10:03.420300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:48.896 [2024-11-28 00:10:03.420310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:48.896 [2024-11-28 00:10:03.420318] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:48.896 [2024-11-28 00:10:03.420333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.896 [2024-11-28 00:10:03.420342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:48.896 [2024-11-28 00:10:03.420351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:48.896 [2024-11-28 00:10:03.420358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:48.896 [2024-11-28 00:10:03.420378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:48.896 [2024-11-28 00:10:03.420386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:48.896 [2024-11-28 00:10:03.420394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:48.896 [2024-11-28 00:10:03.420401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:48.896 [2024-11-28 00:10:03.420409] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:48.896 [2024-11-28 00:10:03.420416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:48.896 [2024-11-28 00:10:03.420424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:48.896 [2024-11-28 00:10:03.420431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:48.896 [2024-11-28 00:10:03.420439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:48.896 [2024-11-28 00:10:03.420447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:48.896 [2024-11-28 00:10:03.420456] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:48.896 [2024-11-28 00:10:03.420464] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.896 [2024-11-28 00:10:03.420475] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:48.896 [2024-11-28 00:10:03.420482] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:48.896 [2024-11-28 00:10:03.420492] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:48.896 [2024-11-28 00:10:03.420499] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:48.896 [2024-11-28 00:10:03.420507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.420514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:48.896 [2024-11-28 00:10:03.420523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:15:48.896 [2024-11-28 00:10:03.420531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.426262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.426292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.896 [2024-11-28 00:10:03.426309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.677 ms 00:15:48.896 [2024-11-28 00:10:03.426317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.426436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.426449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:48.896 [2024-11-28 00:10:03.426459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:48.896 [2024-11-28 00:10:03.426467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.435150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.435180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.896 [2024-11-28 00:10:03.435191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.661 ms 00:15:48.896 [2024-11-28 00:10:03.435199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.435244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.435252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.896 [2024-11-28 00:10:03.435262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:48.896 [2024-11-28 00:10:03.435273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.435588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.435607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.896 [2024-11-28 00:10:03.435619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:15:48.896 [2024-11-28 00:10:03.435626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.435738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.435749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.896 [2024-11-28 00:10:03.435759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:15:48.896 [2024-11-28 00:10:03.435766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.440819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.440936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.896 [2024-11-28 00:10:03.440955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:15:48.896 [2024-11-28 00:10:03.440964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.443169] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:15:48.896 [2024-11-28 00:10:03.443194] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:48.896 [2024-11-28 00:10:03.443205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.443217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:48.896 [2024-11-28 00:10:03.443227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.153 ms 00:15:48.896 [2024-11-28 00:10:03.443234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.896 [2024-11-28 00:10:03.457623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.896 [2024-11-28 00:10:03.457654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:48.897 [2024-11-28 00:10:03.457671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.347 ms 00:15:48.897 [2024-11-28 00:10:03.457681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.897 [2024-11-28 00:10:03.459337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.897 [2024-11-28 00:10:03.459379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:48.897 [2024-11-28 00:10:03.459393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:15:48.897 [2024-11-28 00:10:03.459399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.897 [2024-11-28 00:10:03.460829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.897 [2024-11-28 00:10:03.460858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:48.897 [2024-11-28 00:10:03.460868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:15:48.897 [2024-11-28 00:10:03.460875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.897 [2024-11-28 00:10:03.461068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.897 [2024-11-28 00:10:03.461081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:48.897 [2024-11-28 00:10:03.461091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:48.897 [2024-11-28 00:10:03.461098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.897 [2024-11-28 00:10:03.477829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.897 [2024-11-28 00:10:03.477877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:48.897 [2024-11-28 00:10:03.477890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.700 ms 00:15:48.897 [2024-11-28 00:10:03.477900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.897 [2024-11-28 00:10:03.485211] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:49.156 [2024-11-28 00:10:03.498475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.498515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:49.156 [2024-11-28 00:10:03.498525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.501 ms 00:15:49.156 [2024-11-28 00:10:03.498536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.498597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.498607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:49.156 [2024-11-28 00:10:03.498616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:49.156 [2024-11-28 00:10:03.498625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.498670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.498680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:49.156 [2024-11-28 00:10:03.498687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:49.156 [2024-11-28 00:10:03.498696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.499834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.499954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:49.156 [2024-11-28 00:10:03.499973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:15:49.156 [2024-11-28 00:10:03.499982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.500013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.500026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:49.156 [2024-11-28 00:10:03.500035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:49.156 [2024-11-28 00:10:03.500044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.500074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:49.156 [2024-11-28 00:10:03.500084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.500096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:49.156 [2024-11-28 00:10:03.500105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:49.156 [2024-11-28 00:10:03.500112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.503651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.503746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:49.156 [2024-11-28 00:10:03.503804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.514 ms 00:15:49.156 [2024-11-28 00:10:03.503827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.503965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.503993] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:49.156 [2024-11-28 00:10:03.504100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:49.156 [2024-11-28 00:10:03.504122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.504876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:49.156 [2024-11-28 00:10:03.505950] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.551 ms, result 0 00:15:49.156 [2024-11-28 00:10:03.507032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:49.156 Some configs were skipped because the RPC state that can call them passed over. 00:15:49.156 00:10:03 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:15:49.156 [2024-11-28 00:10:03.719290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.156 [2024-11-28 00:10:03.719478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:49.156 [2024-11-28 00:10:03.719534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:15:49.156 [2024-11-28 00:10:03.719609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.156 [2024-11-28 00:10:03.719669] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.676 ms, result 0 00:15:49.156 true 00:15:49.156 00:10:03 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:15:49.415 [2024-11-28 00:10:03.913460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.415 [2024-11-28 00:10:03.913506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:49.415 [2024-11-28 00:10:03.913520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.600 ms 00:15:49.415 [2024-11-28 00:10:03.913528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.415 [2024-11-28 00:10:03.913565] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.706 ms, result 0 00:15:49.415 true 00:15:49.415 00:10:03 -- ftl/trim.sh@102 -- # killprocess 83585 00:15:49.415 00:10:03 -- common/autotest_common.sh@936 -- # '[' -z 83585 ']' 00:15:49.415 00:10:03 -- common/autotest_common.sh@940 -- # kill -0 83585 00:15:49.415 00:10:03 -- common/autotest_common.sh@941 -- # uname 00:15:49.415 00:10:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.415 00:10:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83585 00:15:49.415 killing process with pid 83585 00:15:49.415 00:10:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.415 00:10:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.415 00:10:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83585' 00:15:49.415 00:10:03 -- common/autotest_common.sh@955 -- # kill 83585 00:15:49.415 00:10:03 -- common/autotest_common.sh@960 -- # wait 83585 00:15:49.674 [2024-11-28 00:10:04.038724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.674 [2024-11-28 00:10:04.038780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:49.674 
[2024-11-28 00:10:04.038793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:49.674 [2024-11-28 00:10:04.038804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.674 [2024-11-28 00:10:04.038826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:49.674 [2024-11-28 00:10:04.039246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.674 [2024-11-28 00:10:04.039266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:49.674 [2024-11-28 00:10:04.039276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:15:49.674 [2024-11-28 00:10:04.039283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.674 [2024-11-28 00:10:04.039584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.674 [2024-11-28 00:10:04.039594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:49.674 [2024-11-28 00:10:04.039607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:15:49.674 [2024-11-28 00:10:04.039615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.043770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.043906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:49.675 [2024-11-28 00:10:04.043924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.133 ms 00:15:49.675 [2024-11-28 00:10:04.043932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.051007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.051115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:49.675 [2024-11-28 00:10:04.051135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.036 ms 00:15:49.675 [2024-11-28 00:10:04.051145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.052459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.052485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:49.675 [2024-11-28 00:10:04.052495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:15:49.675 [2024-11-28 00:10:04.052502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.056276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.056308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:49.675 [2024-11-28 00:10:04.056320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:15:49.675 [2024-11-28 00:10:04.056327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.056460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.056469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:49.675 [2024-11-28 00:10:04.056481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:49.675 [2024-11-28 00:10:04.056488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 
00:10:04.058408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.058437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:49.675 [2024-11-28 00:10:04.058450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.898 ms 00:15:49.675 [2024-11-28 00:10:04.058457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.059862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.059892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:49.675 [2024-11-28 00:10:04.059904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:15:49.675 [2024-11-28 00:10:04.059911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.060876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.060905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:49.675 [2024-11-28 00:10:04.060916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:15:49.675 [2024-11-28 00:10:04.060922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.062000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.675 [2024-11-28 00:10:04.062102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:49.675 [2024-11-28 00:10:04.062119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:15:49.675 [2024-11-28 00:10:04.062126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.675 [2024-11-28 00:10:04.062157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:49.675 [2024-11-28 00:10:04.062170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062488] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:49.675 [2024-11-28 00:10:04.062658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 
00:10:04.062689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:15:49.676 [2024-11-28 00:10:04.062898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.062995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.063005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:49.676 [2024-11-28 00:10:04.063020] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:49.676 [2024-11-28 00:10:04.063029] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:49.676 [2024-11-28 00:10:04.063037] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:49.676 [2024-11-28 00:10:04.063045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:49.676 [2024-11-28 00:10:04.063052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:49.676 [2024-11-28 00:10:04.063061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:49.676 [2024-11-28 00:10:04.063071] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:49.676 [2024-11-28 00:10:04.063082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:49.676 [2024-11-28 00:10:04.063088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:49.676 [2024-11-28 00:10:04.063100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:49.676 [2024-11-28 00:10:04.063106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:49.676 [2024-11-28 00:10:04.063114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.676 [2024-11-28 00:10:04.063121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:49.676 [2024-11-28 00:10:04.063132] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:15:49.676 [2024-11-28 00:10:04.063139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.064458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.676 [2024-11-28 00:10:04.064473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:49.676 [2024-11-28 00:10:04.064483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:15:49.676 [2024-11-28 00:10:04.064492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.064554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.676 [2024-11-28 00:10:04.064562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:49.676 [2024-11-28 00:10:04.064571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:49.676 [2024-11-28 00:10:04.064578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.069786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.069894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:49.676 [2024-11-28 00:10:04.069949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.069974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.070048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.070115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:49.676 [2024-11-28 00:10:04.070142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.070160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.070214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.070329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:49.676 [2024-11-28 00:10:04.070355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.070394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.070431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.070532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:49.676 [2024-11-28 00:10:04.070562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.070581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.079554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.079713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.676 [2024-11-28 00:10:04.079771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.079822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.083452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.083570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:15:49.676 [2024-11-28 00:10:04.083623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.083645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.083714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.083738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.676 [2024-11-28 00:10:04.083783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.083844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.083904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.083993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.676 [2024-11-28 00:10:04.084018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.084036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.676 [2024-11-28 00:10:04.084116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.676 [2024-11-28 00:10:04.084139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.676 [2024-11-28 00:10:04.084160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.676 [2024-11-28 00:10:04.084178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.677 [2024-11-28 00:10:04.084264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.677 [2024-11-28 00:10:04.084291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:49.677 [2024-11-28 00:10:04.084314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.677 [2024-11-28 00:10:04.084390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.677 [2024-11-28 00:10:04.084445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.677 [2024-11-28 00:10:04.084467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.677 [2024-11-28 00:10:04.084520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.677 [2024-11-28 00:10:04.084560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.677 [2024-11-28 00:10:04.084617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.677 [2024-11-28 00:10:04.084640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.677 [2024-11-28 00:10:04.084660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.677 [2024-11-28 00:10:04.084678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.677 [2024-11-28 00:10:04.084809] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.071 ms, result 0 00:15:49.677 00:10:04 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:49.935 [2024-11-28 00:10:04.309322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:49.935 [2024-11-28 00:10:04.309623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83621 ] 00:15:49.935 [2024-11-28 00:10:04.455114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.935 [2024-11-28 00:10:04.485254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.194 [2024-11-28 00:10:04.567526] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:50.194 [2024-11-28 00:10:04.567741] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:50.194 [2024-11-28 00:10:04.713071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.194 [2024-11-28 00:10:04.713241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:50.194 [2024-11-28 00:10:04.713300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:50.194 [2024-11-28 00:10:04.713327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.194 [2024-11-28 00:10:04.715522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.194 [2024-11-28 00:10:04.715632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:50.194 [2024-11-28 00:10:04.715688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:15:50.194 [2024-11-28 00:10:04.715710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.194 [2024-11-28 00:10:04.715869] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:50.194 [2024-11-28 00:10:04.716143] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:50.194 [2024-11-28 00:10:04.716195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.194 [2024-11-28 00:10:04.716252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:50.195 [2024-11-28 00:10:04.716280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:15:50.195 [2024-11-28 00:10:04.716302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.717493] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:50.195 [2024-11-28 00:10:04.719591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.719697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:50.195 [2024-11-28 00:10:04.719752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:15:50.195 [2024-11-28 00:10:04.719774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.719841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.719853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:50.195 [2024-11-28 00:10:04.719862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:50.195 [2024-11-28 00:10:04.719871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.724439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 
00:10:04.724471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:50.195 [2024-11-28 00:10:04.724480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:15:50.195 [2024-11-28 00:10:04.724490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.724570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.724580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:50.195 [2024-11-28 00:10:04.724590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:50.195 [2024-11-28 00:10:04.724597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.724621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.724629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:50.195 [2024-11-28 00:10:04.724637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:50.195 [2024-11-28 00:10:04.724647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.724670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:50.195 [2024-11-28 00:10:04.725936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.726044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:50.195 [2024-11-28 00:10:04.726057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:15:50.195 [2024-11-28 00:10:04.726068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.726105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.726116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:50.195 [2024-11-28 00:10:04.726124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:50.195 [2024-11-28 00:10:04.726131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.726152] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:50.195 [2024-11-28 00:10:04.726168] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:50.195 [2024-11-28 00:10:04.726201] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:50.195 [2024-11-28 00:10:04.726215] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:50.195 [2024-11-28 00:10:04.726287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:50.195 [2024-11-28 00:10:04.726299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:50.195 [2024-11-28 00:10:04.726309] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:50.195 [2024-11-28 00:10:04.726318] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726327] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726334] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:50.195 [2024-11-28 00:10:04.726342] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:50.195 [2024-11-28 00:10:04.726350] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:50.195 [2024-11-28 00:10:04.726357] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:50.195 [2024-11-28 00:10:04.726384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.726391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:50.195 [2024-11-28 00:10:04.726399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:15:50.195 [2024-11-28 00:10:04.726405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.726470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.195 [2024-11-28 00:10:04.726478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:50.195 [2024-11-28 00:10:04.726486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:50.195 [2024-11-28 00:10:04.726492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.195 [2024-11-28 00:10:04.726572] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:50.195 [2024-11-28 00:10:04.726582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:50.195 [2024-11-28 00:10:04.726589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:50.195 [2024-11-28 00:10:04.726611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:50.195 [2024-11-28 00:10:04.726632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:50.195 [2024-11-28 00:10:04.726647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:50.195 [2024-11-28 00:10:04.726659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:50.195 [2024-11-28 00:10:04.726667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:50.195 [2024-11-28 00:10:04.726676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:50.195 [2024-11-28 00:10:04.726684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:50.195 [2024-11-28 00:10:04.726692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:50.195 [2024-11-28 00:10:04.726709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:50.195 [2024-11-28 00:10:04.726716] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:50.195 [2024-11-28 00:10:04.726730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:50.195 [2024-11-28 00:10:04.726737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:50.195 [2024-11-28 00:10:04.726752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:50.195 [2024-11-28 00:10:04.726774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:50.195 [2024-11-28 00:10:04.726796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:50.195 [2024-11-28 00:10:04.726821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:50.195 [2024-11-28 00:10:04.726843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:50.195 [2024-11-28 00:10:04.726857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:50.195 [2024-11-28 00:10:04.726864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:50.195 [2024-11-28 00:10:04.726871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:50.195 [2024-11-28 00:10:04.726878] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:50.195 [2024-11-28 00:10:04.726886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:50.195 [2024-11-28 00:10:04.726897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:50.195 [2024-11-28 00:10:04.726905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:50.195 [2024-11-28 00:10:04.726913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:50.195 [2024-11-28 00:10:04.726921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:50.195 [2024-11-28 00:10:04.726929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:50.195 [2024-11-28 00:10:04.726936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:50.195 [2024-11-28 00:10:04.726945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:50.196 [2024-11-28 00:10:04.726953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:50.196 [2024-11-28 00:10:04.726962] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:50.196 [2024-11-28 00:10:04.726972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:50.196 [2024-11-28 00:10:04.726983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:50.196 [2024-11-28 00:10:04.726991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:50.196 [2024-11-28 00:10:04.726999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:50.196 [2024-11-28 00:10:04.727008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:50.196 [2024-11-28 00:10:04.727015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:50.196 [2024-11-28 00:10:04.727023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:50.196 [2024-11-28 00:10:04.727030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:50.196 [2024-11-28 00:10:04.727036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:50.196 [2024-11-28 00:10:04.727043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:50.196 [2024-11-28 00:10:04.727050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:50.196 [2024-11-28 00:10:04.727057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:50.196 [2024-11-28 00:10:04.727064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:50.196 [2024-11-28 00:10:04.727073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:50.196 [2024-11-28 00:10:04.727080] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:50.196 [2024-11-28 00:10:04.727088] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:50.196 [2024-11-28 00:10:04.727095] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:50.196 [2024-11-28 00:10:04.727102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:50.196 [2024-11-28 00:10:04.727109] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:50.196 [2024-11-28 00:10:04.727115] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:50.196 [2024-11-28 00:10:04.727123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.727130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:50.196 [2024-11-28 00:10:04.727136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:15:50.196 [2024-11-28 00:10:04.727146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.732814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.732923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:50.196 [2024-11-28 00:10:04.732936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.628 ms 00:15:50.196 [2024-11-28 00:10:04.732948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.733059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.733073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:50.196 [2024-11-28 00:10:04.733086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:50.196 [2024-11-28 00:10:04.733093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.753385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.753444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:50.196 [2024-11-28 00:10:04.753468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.249 ms 00:15:50.196 [2024-11-28 00:10:04.753486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.753594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.753613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:50.196 [2024-11-28 00:10:04.753628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:50.196 [2024-11-28 00:10:04.753642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.754018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.754060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:50.196 [2024-11-28 00:10:04.754077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:15:50.196 [2024-11-28 00:10:04.754096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.754296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.754330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:50.196 [2024-11-28 00:10:04.754346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:15:50.196 [2024-11-28 00:10:04.754386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.760271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.760305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:50.196 [2024-11-28 00:10:04.760315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.849 ms 00:15:50.196 
[2024-11-28 00:10:04.760322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.762586] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:15:50.196 [2024-11-28 00:10:04.762619] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:50.196 [2024-11-28 00:10:04.762629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.762636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:50.196 [2024-11-28 00:10:04.762649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:15:50.196 [2024-11-28 00:10:04.762656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.777135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.777261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:50.196 [2024-11-28 00:10:04.777277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.437 ms 00:15:50.196 [2024-11-28 00:10:04.777291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.778864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.778894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:50.196 [2024-11-28 00:10:04.778903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:15:50.196 [2024-11-28 00:10:04.778910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.780237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.780268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:50.196 [2024-11-28 00:10:04.780277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:15:50.196 [2024-11-28 00:10:04.780283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.196 [2024-11-28 00:10:04.780563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.196 [2024-11-28 00:10:04.780632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:50.196 [2024-11-28 00:10:04.780682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:15:50.196 [2024-11-28 00:10:04.780692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.455 [2024-11-28 00:10:04.798037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.455 [2024-11-28 00:10:04.798087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:50.456 [2024-11-28 00:10:04.798099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.321 ms 00:15:50.456 [2024-11-28 00:10:04.798107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.805473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:50.456 [2024-11-28 00:10:04.818953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.818991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:50.456 [2024-11-28 00:10:04.819002] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.764 ms 00:15:50.456 [2024-11-28 00:10:04.819009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.819075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.819087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:50.456 [2024-11-28 00:10:04.819096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:50.456 [2024-11-28 00:10:04.819102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.819145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.819153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:50.456 [2024-11-28 00:10:04.819160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:50.456 [2024-11-28 00:10:04.819167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.820391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.820420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:50.456 [2024-11-28 00:10:04.820431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:15:50.456 [2024-11-28 00:10:04.820438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.820471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.820482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:50.456 [2024-11-28 00:10:04.820492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:50.456 [2024-11-28 00:10:04.820499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.820529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:50.456 [2024-11-28 00:10:04.820538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.820545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:50.456 [2024-11-28 00:10:04.820553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:50.456 [2024-11-28 00:10:04.820562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.823877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.823910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:50.456 [2024-11-28 00:10:04.823919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:15:50.456 [2024-11-28 00:10:04.823926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.823993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.456 [2024-11-28 00:10:04.824003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:50.456 [2024-11-28 00:10:04.824011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:50.456 [2024-11-28 00:10:04.824018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.456 [2024-11-28 00:10:04.825120] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:50.456 [2024-11-28 00:10:04.826122] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.785 ms, result 0 00:15:50.456 [2024-11-28 00:10:04.826804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:50.456 [2024-11-28 00:10:04.836263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:51.391  [2024-11-28T00:10:06.928Z] Copying: 46/256 [MB] (46 MBps) [2024-11-28T00:10:08.305Z] Copying: 72/256 [MB] (26 MBps) [2024-11-28T00:10:09.240Z] Copying: 116/256 [MB] (43 MBps) [2024-11-28T00:10:10.176Z] Copying: 160/256 [MB] (44 MBps) [2024-11-28T00:10:11.111Z] Copying: 205/256 [MB] (44 MBps) [2024-11-28T00:10:11.111Z] Copying: 249/256 [MB] (43 MBps) [2024-11-28T00:10:11.679Z] Copying: 256/256 [MB] (average 41 MBps)[2024-11-28 00:10:11.372030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:57.078 [2024-11-28 00:10:11.373983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.374317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:57.078 [2024-11-28 00:10:11.374398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:57.078 [2024-11-28 00:10:11.374420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.374485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:57.078 [2024-11-28 00:10:11.375146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.375192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:57.078 [2024-11-28 00:10:11.375214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:15:57.078 [2024-11-28 00:10:11.375236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.376022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.376074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:57.078 [2024-11-28 00:10:11.376097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:15:57.078 [2024-11-28 00:10:11.376116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.383178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.383198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:57.078 [2024-11-28 00:10:11.383208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.014 ms 00:15:57.078 [2024-11-28 00:10:11.383216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.390102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.390138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:57.078 [2024-11-28 00:10:11.390150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:15:57.078 [2024-11-28 00:10:11.390157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 
00:10:11.391648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.391679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:57.078 [2024-11-28 00:10:11.391688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:15:57.078 [2024-11-28 00:10:11.391695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.394259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.394300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:57.078 [2024-11-28 00:10:11.394309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:15:57.078 [2024-11-28 00:10:11.394322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.394469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.394479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:57.078 [2024-11-28 00:10:11.394488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:57.078 [2024-11-28 00:10:11.394499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.396255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.396285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:57.078 [2024-11-28 00:10:11.396293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:15:57.078 [2024-11-28 00:10:11.396299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.397810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.397838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:57.078 [2024-11-28 00:10:11.397847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:15:57.078 [2024-11-28 00:10:11.397853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.398879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.398911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:57.078 [2024-11-28 00:10:11.398919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:15:57.078 [2024-11-28 00:10:11.398926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.399723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.078 [2024-11-28 00:10:11.399841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:57.078 [2024-11-28 00:10:11.399855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:15:57.078 [2024-11-28 00:10:11.399862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.078 [2024-11-28 00:10:11.399882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:57.078 [2024-11-28 00:10:11.399894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:15:57.078 [2024-11-28 00:10:11.399911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.399999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:15:57.078 [2024-11-28 00:10:11.400095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:57.078 [2024-11-28 00:10:11.400242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400471] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:57.079 [2024-11-28 00:10:11.400656] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:57.079 [2024-11-28 00:10:11.400664] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 22b2d521-b6b4-4305-b48e-c46298f0d7de 00:15:57.079 [2024-11-28 00:10:11.400671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:57.079 [2024-11-28 00:10:11.400678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:57.079 [2024-11-28 00:10:11.400690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:57.079 [2024-11-28 00:10:11.400698] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:57.079 [2024-11-28 00:10:11.400710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:57.079 [2024-11-28 00:10:11.400718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:57.079 [2024-11-28 00:10:11.400728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:57.079 [2024-11-28 00:10:11.400734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:57.079 [2024-11-28 00:10:11.400741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:57.079 [2024-11-28 00:10:11.400747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.079 [2024-11-28 00:10:11.400755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:57.079 [2024-11-28 00:10:11.400764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:15:57.079 [2024-11-28 00:10:11.400772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.402096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.079 [2024-11-28 00:10:11.402115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:57.079 [2024-11-28 00:10:11.402124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:15:57.079 [2024-11-28 00:10:11.402132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.402184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.079 [2024-11-28 00:10:11.402192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:57.079 [2024-11-28 00:10:11.402200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:57.079 [2024-11-28 00:10:11.402207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.407187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.407299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:57.079 [2024-11-28 00:10:11.407388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.407460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.407591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.407655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:57.079 [2024-11-28 00:10:11.407703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.407724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.407782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.407807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:15:57.079 [2024-11-28 00:10:11.407832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.407856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.407884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.407947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:57.079 [2024-11-28 00:10:11.407969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.407987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.416043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.416192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:57.079 [2024-11-28 00:10:11.416242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.416263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.079 [2024-11-28 00:10:11.419902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.079 [2024-11-28 00:10:11.420035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:57.079 [2024-11-28 00:10:11.420094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.079 [2024-11-28 00:10:11.420116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.420193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.420222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:57.080 [2024-11-28 00:10:11.420283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.420305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.420356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.420443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:57.080 [2024-11-28 00:10:11.420463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.420515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.420688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.420754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:57.080 [2024-11-28 00:10:11.420800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.420821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.420870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.420947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:57.080 [2024-11-28 00:10:11.420970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.420997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.421042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.421340] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:57.080 [2024-11-28 00:10:11.421406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.421515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.421592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.080 [2024-11-28 00:10:11.421664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:57.080 [2024-11-28 00:10:11.421695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.080 [2024-11-28 00:10:11.421717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.080 [2024-11-28 00:10:11.421861] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.879 ms, result 0 00:15:57.080 00:15:57.080 00:15:57.080 00:10:11 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:15:57.647 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:15:57.647 00:10:12 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:57.647 00:10:12 -- ftl/trim.sh@109 -- # fio_kill 00:15:57.647 00:10:12 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:15:57.647 00:10:12 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.647 00:10:12 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:15:57.647 00:10:12 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:57.647 00:10:12 -- ftl/trim.sh@20 -- # killprocess 83585 00:15:57.647 00:10:12 -- common/autotest_common.sh@936 -- # '[' -z 83585 ']' 00:15:57.647 Process with pid 83585 is not found 00:15:57.647 00:10:12 -- common/autotest_common.sh@940 -- # kill -0 83585 00:15:57.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83585) - No such process 00:15:57.647 00:10:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83585 is not found' 00:15:57.906 ************************************ 00:15:57.906 END TEST ftl_trim 00:15:57.906 ************************************ 00:15:57.906 00:15:57.906 real 0m42.415s 00:15:57.906 user 1m5.492s 00:15:57.906 sys 0m4.526s 00:15:57.906 00:10:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:57.906 00:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.906 00:10:12 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:15:57.906 00:10:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:57.906 00:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.906 00:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.906 ************************************ 00:15:57.906 START TEST ftl_restore 00:15:57.906 ************************************ 00:15:57.906 00:10:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:15:57.906 * Looking for test storage... 
00:15:57.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.906 00:10:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:57.906 00:10:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:57.906 00:10:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:57.906 00:10:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:57.906 00:10:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:57.906 00:10:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:57.906 00:10:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:57.906 00:10:12 -- scripts/common.sh@335 -- # IFS=.-: 00:15:57.906 00:10:12 -- scripts/common.sh@335 -- # read -ra ver1 00:15:57.906 00:10:12 -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.906 00:10:12 -- scripts/common.sh@336 -- # read -ra ver2 00:15:57.906 00:10:12 -- scripts/common.sh@337 -- # local 'op=<' 00:15:57.906 00:10:12 -- scripts/common.sh@339 -- # ver1_l=2 00:15:57.906 00:10:12 -- scripts/common.sh@340 -- # ver2_l=1 00:15:57.906 00:10:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:57.906 00:10:12 -- scripts/common.sh@343 -- # case "$op" in 00:15:57.906 00:10:12 -- scripts/common.sh@344 -- # : 1 00:15:57.906 00:10:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:57.906 00:10:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.906 00:10:12 -- scripts/common.sh@364 -- # decimal 1 00:15:57.906 00:10:12 -- scripts/common.sh@352 -- # local d=1 00:15:57.906 00:10:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.907 00:10:12 -- scripts/common.sh@354 -- # echo 1 00:15:57.907 00:10:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:57.907 00:10:12 -- scripts/common.sh@365 -- # decimal 2 00:15:57.907 00:10:12 -- scripts/common.sh@352 -- # local d=2 00:15:57.907 00:10:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.907 00:10:12 -- scripts/common.sh@354 -- # echo 2 00:15:57.907 00:10:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:57.907 00:10:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:57.907 00:10:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:57.907 00:10:12 -- scripts/common.sh@367 -- # return 0 00:15:57.907 00:10:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.907 00:10:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:57.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.907 --rc genhtml_branch_coverage=1 00:15:57.907 --rc genhtml_function_coverage=1 00:15:57.907 --rc genhtml_legend=1 00:15:57.907 --rc geninfo_all_blocks=1 00:15:57.907 --rc geninfo_unexecuted_blocks=1 00:15:57.907 00:15:57.907 ' 00:15:57.907 00:10:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:57.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.907 --rc genhtml_branch_coverage=1 00:15:57.907 --rc genhtml_function_coverage=1 00:15:57.907 --rc genhtml_legend=1 00:15:57.907 --rc geninfo_all_blocks=1 00:15:57.907 --rc geninfo_unexecuted_blocks=1 00:15:57.907 00:15:57.907 ' 00:15:57.907 00:10:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:57.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.907 --rc genhtml_branch_coverage=1 00:15:57.907 --rc genhtml_function_coverage=1 00:15:57.907 --rc genhtml_legend=1 00:15:57.907 --rc geninfo_all_blocks=1 00:15:57.907 --rc geninfo_unexecuted_blocks=1 00:15:57.907 00:15:57.907 ' 00:15:57.907 00:10:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:57.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.907 --rc genhtml_branch_coverage=1 00:15:57.907 --rc genhtml_function_coverage=1 00:15:57.907 --rc genhtml_legend=1 00:15:57.907 --rc geninfo_all_blocks=1 00:15:57.907 --rc geninfo_unexecuted_blocks=1 00:15:57.907 00:15:57.907 ' 00:15:57.907 00:10:12 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.907 00:10:12 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:15:57.907 00:10:12 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.907 00:10:12 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.907 00:10:12 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:57.907 00:10:12 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.907 00:10:12 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.907 00:10:12 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.907 00:10:12 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.907 00:10:12 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.907 00:10:12 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.907 00:10:12 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.907 00:10:12 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.907 00:10:12 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.907 00:10:12 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.907 00:10:12 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.907 00:10:12 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.907 00:10:12 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.907 00:10:12 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.907 00:10:12 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.907 00:10:12 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.907 00:10:12 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.907 00:10:12 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.907 00:10:12 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.907 00:10:12 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.907 00:10:12 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.907 00:10:12 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.907 00:10:12 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.907 00:10:12 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.907 00:10:12 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.907 00:10:12 -- ftl/restore.sh@13 -- # mktemp -d 00:15:57.907 00:10:12 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.F0oCTj5wGQ 00:15:57.907 00:10:12 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:15:57.907 00:10:12 -- ftl/restore.sh@16 -- # case $opt in 00:15:57.907 00:10:12 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:15:57.907 00:10:12 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
00:15:57.907 00:10:12 -- ftl/restore.sh@23 -- # shift 2 00:15:57.907 00:10:12 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:15:57.907 00:10:12 -- ftl/restore.sh@25 -- # timeout=240 00:15:57.907 00:10:12 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:15:57.907 00:10:12 -- ftl/restore.sh@39 -- # svcpid=83773 00:15:57.907 00:10:12 -- ftl/restore.sh@41 -- # waitforlisten 83773 00:15:57.907 00:10:12 -- common/autotest_common.sh@829 -- # '[' -z 83773 ']' 00:15:57.907 00:10:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.907 00:10:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.907 00:10:12 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.907 00:10:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.907 00:10:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.907 00:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.907 [2024-11-28 00:10:12.505195] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:57.907 [2024-11-28 00:10:12.505482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83773 ] 00:15:58.166 [2024-11-28 00:10:12.651656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.166 [2024-11-28 00:10:12.681577] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.166 [2024-11-28 00:10:12.681923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.733 00:10:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.733 00:10:13 -- common/autotest_common.sh@862 -- # return 0 00:15:58.733 00:10:13 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:58.733 00:10:13 -- ftl/common.sh@54 -- # local name=nvme0 00:15:58.733 00:10:13 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:58.733 00:10:13 -- ftl/common.sh@56 -- # local size=103424 00:15:58.733 00:10:13 -- ftl/common.sh@59 -- # local base_bdev 00:15:58.733 00:10:13 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:58.992 00:10:13 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:58.992 00:10:13 -- ftl/common.sh@62 -- # local base_size 00:15:58.992 00:10:13 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:58.992 00:10:13 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:58.992 00:10:13 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:58.992 00:10:13 -- common/autotest_common.sh@1369 -- # local bs 00:15:58.992 00:10:13 -- common/autotest_common.sh@1370 -- # local nb 00:15:58.992 00:10:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:59.250 00:10:13 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:59.250 { 00:15:59.250 "name": "nvme0n1", 00:15:59.250 "aliases": [ 00:15:59.250 "7bf4d847-2693-436d-9e41-f839c87d4c6c" 00:15:59.250 ], 00:15:59.250 "product_name": "NVMe disk", 00:15:59.250 "block_size": 4096, 00:15:59.250 "num_blocks": 1310720, 00:15:59.250 "uuid": 
"7bf4d847-2693-436d-9e41-f839c87d4c6c", 00:15:59.250 "assigned_rate_limits": { 00:15:59.250 "rw_ios_per_sec": 0, 00:15:59.250 "rw_mbytes_per_sec": 0, 00:15:59.250 "r_mbytes_per_sec": 0, 00:15:59.250 "w_mbytes_per_sec": 0 00:15:59.250 }, 00:15:59.250 "claimed": true, 00:15:59.250 "claim_type": "read_many_write_one", 00:15:59.250 "zoned": false, 00:15:59.250 "supported_io_types": { 00:15:59.250 "read": true, 00:15:59.250 "write": true, 00:15:59.250 "unmap": true, 00:15:59.250 "write_zeroes": true, 00:15:59.250 "flush": true, 00:15:59.250 "reset": true, 00:15:59.250 "compare": true, 00:15:59.250 "compare_and_write": false, 00:15:59.250 "abort": true, 00:15:59.250 "nvme_admin": true, 00:15:59.250 "nvme_io": true 00:15:59.250 }, 00:15:59.250 "driver_specific": { 00:15:59.250 "nvme": [ 00:15:59.250 { 00:15:59.250 "pci_address": "0000:00:07.0", 00:15:59.250 "trid": { 00:15:59.250 "trtype": "PCIe", 00:15:59.250 "traddr": "0000:00:07.0" 00:15:59.250 }, 00:15:59.250 "ctrlr_data": { 00:15:59.250 "cntlid": 0, 00:15:59.250 "vendor_id": "0x1b36", 00:15:59.250 "model_number": "QEMU NVMe Ctrl", 00:15:59.250 "serial_number": "12341", 00:15:59.250 "firmware_revision": "8.0.0", 00:15:59.250 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:59.250 "oacs": { 00:15:59.250 "security": 0, 00:15:59.250 "format": 1, 00:15:59.250 "firmware": 0, 00:15:59.250 "ns_manage": 1 00:15:59.250 }, 00:15:59.250 "multi_ctrlr": false, 00:15:59.251 "ana_reporting": false 00:15:59.251 }, 00:15:59.251 "vs": { 00:15:59.251 "nvme_version": "1.4" 00:15:59.251 }, 00:15:59.251 "ns_data": { 00:15:59.251 "id": 1, 00:15:59.251 "can_share": false 00:15:59.251 } 00:15:59.251 } 00:15:59.251 ], 00:15:59.251 "mp_policy": "active_passive" 00:15:59.251 } 00:15:59.251 } 00:15:59.251 ]' 00:15:59.251 00:10:13 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:59.251 00:10:13 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:59.251 00:10:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:59.251 00:10:13 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:59.251 00:10:13 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:59.251 00:10:13 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:59.251 00:10:13 -- ftl/common.sh@63 -- # base_size=5120 00:15:59.251 00:10:13 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:59.251 00:10:13 -- ftl/common.sh@67 -- # clear_lvols 00:15:59.251 00:10:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:59.251 00:10:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:59.509 00:10:14 -- ftl/common.sh@28 -- # stores=cc16a321-578d-403c-88d3-b4dfcdde7a92 00:15:59.509 00:10:14 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:59.509 00:10:14 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc16a321-578d-403c-88d3-b4dfcdde7a92 00:15:59.768 00:10:14 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:00.026 00:10:14 -- ftl/common.sh@68 -- # lvs=585eed1c-69a3-4099-ad86-79951c1f8520 00:16:00.026 00:10:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 585eed1c-69a3-4099-ad86-79951c1f8520 00:16:00.026 00:10:14 -- ftl/restore.sh@43 -- # split_bdev=f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.026 00:10:14 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:16:00.026 00:10:14 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.026 00:10:14 -- ftl/common.sh@35 -- # local name=nvc0 00:16:00.026 00:10:14 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:00.026 00:10:14 -- ftl/common.sh@37 -- # local base_bdev=f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.026 00:10:14 -- ftl/common.sh@38 -- # local cache_size= 00:16:00.026 00:10:14 -- ftl/common.sh@41 -- # get_bdev_size f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.026 00:10:14 -- common/autotest_common.sh@1367 -- # local bdev_name=f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.026 00:10:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:00.026 00:10:14 -- common/autotest_common.sh@1369 -- # local bs 00:16:00.026 00:10:14 -- common/autotest_common.sh@1370 -- # local nb 00:16:00.026 00:10:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.285 00:10:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:00.285 { 00:16:00.285 "name": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:00.285 "aliases": [ 00:16:00.285 "lvs/nvme0n1p0" 00:16:00.285 ], 00:16:00.285 "product_name": "Logical Volume", 00:16:00.285 "block_size": 4096, 00:16:00.285 "num_blocks": 26476544, 00:16:00.285 "uuid": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:00.285 "assigned_rate_limits": { 00:16:00.285 "rw_ios_per_sec": 0, 00:16:00.285 "rw_mbytes_per_sec": 0, 00:16:00.285 "r_mbytes_per_sec": 0, 00:16:00.285 "w_mbytes_per_sec": 0 00:16:00.285 }, 00:16:00.285 "claimed": false, 00:16:00.285 "zoned": false, 00:16:00.285 "supported_io_types": { 00:16:00.285 "read": true, 00:16:00.285 "write": true, 00:16:00.285 "unmap": true, 00:16:00.285 "write_zeroes": true, 00:16:00.285 "flush": false, 00:16:00.285 "reset": true, 00:16:00.285 "compare": false, 00:16:00.285 "compare_and_write": false, 00:16:00.285 "abort": false, 00:16:00.285 "nvme_admin": false, 00:16:00.285 "nvme_io": false 00:16:00.285 }, 00:16:00.285 "driver_specific": { 00:16:00.285 "lvol": { 00:16:00.285 "lvol_store_uuid": "585eed1c-69a3-4099-ad86-79951c1f8520", 00:16:00.285 "base_bdev": "nvme0n1", 00:16:00.285 "thin_provision": true, 00:16:00.285 "snapshot": false, 00:16:00.285 "clone": false, 00:16:00.285 "esnap_clone": false 00:16:00.285 } 00:16:00.285 } 00:16:00.285 } 00:16:00.285 ]' 00:16:00.285 00:10:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:00.285 00:10:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:00.285 00:10:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:00.285 00:10:14 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:00.285 00:10:14 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:00.285 00:10:14 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:00.285 00:10:14 -- ftl/common.sh@41 -- # local base_size=5171 00:16:00.285 00:10:14 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:00.285 00:10:14 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:00.544 00:10:15 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:00.544 00:10:15 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:00.544 00:10:15 -- ftl/common.sh@48 -- # get_bdev_size f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.544 00:10:15 -- common/autotest_common.sh@1367 -- # local bdev_name=f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.544 00:10:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:00.544 00:10:15 -- common/autotest_common.sh@1369 -- # local 
bs 00:16:00.544 00:10:15 -- common/autotest_common.sh@1370 -- # local nb 00:16:00.544 00:10:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:00.850 00:10:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:00.850 { 00:16:00.850 "name": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:00.850 "aliases": [ 00:16:00.850 "lvs/nvme0n1p0" 00:16:00.850 ], 00:16:00.850 "product_name": "Logical Volume", 00:16:00.850 "block_size": 4096, 00:16:00.850 "num_blocks": 26476544, 00:16:00.850 "uuid": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:00.850 "assigned_rate_limits": { 00:16:00.850 "rw_ios_per_sec": 0, 00:16:00.850 "rw_mbytes_per_sec": 0, 00:16:00.850 "r_mbytes_per_sec": 0, 00:16:00.850 "w_mbytes_per_sec": 0 00:16:00.850 }, 00:16:00.850 "claimed": false, 00:16:00.850 "zoned": false, 00:16:00.850 "supported_io_types": { 00:16:00.850 "read": true, 00:16:00.850 "write": true, 00:16:00.850 "unmap": true, 00:16:00.850 "write_zeroes": true, 00:16:00.850 "flush": false, 00:16:00.850 "reset": true, 00:16:00.850 "compare": false, 00:16:00.850 "compare_and_write": false, 00:16:00.850 "abort": false, 00:16:00.850 "nvme_admin": false, 00:16:00.850 "nvme_io": false 00:16:00.850 }, 00:16:00.850 "driver_specific": { 00:16:00.850 "lvol": { 00:16:00.850 "lvol_store_uuid": "585eed1c-69a3-4099-ad86-79951c1f8520", 00:16:00.850 "base_bdev": "nvme0n1", 00:16:00.850 "thin_provision": true, 00:16:00.850 "snapshot": false, 00:16:00.850 "clone": false, 00:16:00.850 "esnap_clone": false 00:16:00.850 } 00:16:00.850 } 00:16:00.850 } 00:16:00.850 ]' 00:16:00.850 00:10:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:00.850 00:10:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:00.850 00:10:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:00.850 00:10:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:00.850 00:10:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:00.850 00:10:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:00.850 00:10:15 -- ftl/common.sh@48 -- # cache_size=5171 00:16:00.850 00:10:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:01.113 00:10:15 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:01.113 00:10:15 -- ftl/restore.sh@48 -- # get_bdev_size f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:01.113 00:10:15 -- common/autotest_common.sh@1367 -- # local bdev_name=f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:01.113 00:10:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:01.113 00:10:15 -- common/autotest_common.sh@1369 -- # local bs 00:16:01.113 00:10:15 -- common/autotest_common.sh@1370 -- # local nb 00:16:01.113 00:10:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f476a8ea-e9cc-44aa-ba31-1742164ade47 00:16:01.372 00:10:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:01.372 { 00:16:01.372 "name": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:01.372 "aliases": [ 00:16:01.372 "lvs/nvme0n1p0" 00:16:01.372 ], 00:16:01.372 "product_name": "Logical Volume", 00:16:01.372 "block_size": 4096, 00:16:01.372 "num_blocks": 26476544, 00:16:01.372 "uuid": "f476a8ea-e9cc-44aa-ba31-1742164ade47", 00:16:01.372 "assigned_rate_limits": { 00:16:01.372 "rw_ios_per_sec": 0, 00:16:01.372 "rw_mbytes_per_sec": 0, 00:16:01.372 "r_mbytes_per_sec": 0, 00:16:01.372 "w_mbytes_per_sec": 0 00:16:01.372 }, 00:16:01.372 
"claimed": false, 00:16:01.372 "zoned": false, 00:16:01.372 "supported_io_types": { 00:16:01.372 "read": true, 00:16:01.372 "write": true, 00:16:01.372 "unmap": true, 00:16:01.372 "write_zeroes": true, 00:16:01.372 "flush": false, 00:16:01.372 "reset": true, 00:16:01.372 "compare": false, 00:16:01.372 "compare_and_write": false, 00:16:01.372 "abort": false, 00:16:01.372 "nvme_admin": false, 00:16:01.372 "nvme_io": false 00:16:01.372 }, 00:16:01.372 "driver_specific": { 00:16:01.372 "lvol": { 00:16:01.372 "lvol_store_uuid": "585eed1c-69a3-4099-ad86-79951c1f8520", 00:16:01.372 "base_bdev": "nvme0n1", 00:16:01.372 "thin_provision": true, 00:16:01.372 "snapshot": false, 00:16:01.372 "clone": false, 00:16:01.372 "esnap_clone": false 00:16:01.372 } 00:16:01.372 } 00:16:01.372 } 00:16:01.372 ]' 00:16:01.372 00:10:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:01.372 00:10:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:01.372 00:10:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:01.372 00:10:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:01.372 00:10:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:01.372 00:10:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:01.372 00:10:15 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:01.372 00:10:15 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f476a8ea-e9cc-44aa-ba31-1742164ade47 --l2p_dram_limit 10' 00:16:01.372 00:10:15 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:01.372 00:10:15 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:16:01.372 00:10:15 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:01.372 00:10:15 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:01.372 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:01.372 00:10:15 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f476a8ea-e9cc-44aa-ba31-1742164ade47 --l2p_dram_limit 10 -c nvc0n1p0 00:16:01.632 [2024-11-28 00:10:15.978603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.978643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.632 [2024-11-28 00:10:15.978655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.632 [2024-11-28 00:10:15.978662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.978702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.978709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.632 [2024-11-28 00:10:15.978718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:01.632 [2024-11-28 00:10:15.978724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.978740] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.632 [2024-11-28 00:10:15.978982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.632 [2024-11-28 00:10:15.978995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.979001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.632 [2024-11-28 00:10:15.979009] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:16:01.632 [2024-11-28 00:10:15.979017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.979040] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:16:01.632 [2024-11-28 00:10:15.980036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.980060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:01.632 [2024-11-28 00:10:15.980068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:01.632 [2024-11-28 00:10:15.980076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.984749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.984780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.632 [2024-11-28 00:10:15.984788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:16:01.632 [2024-11-28 00:10:15.984796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.984861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.984869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.632 [2024-11-28 00:10:15.984875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:01.632 [2024-11-28 00:10:15.984883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.984916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.984928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.632 [2024-11-28 00:10:15.984936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.632 [2024-11-28 00:10:15.984943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.984962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:01.632 [2024-11-28 00:10:15.986223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.986250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.632 [2024-11-28 00:10:15.986259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.265 ms 00:16:01.632 [2024-11-28 00:10:15.986265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.986294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.986301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.632 [2024-11-28 00:10:15.986312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:01.632 [2024-11-28 00:10:15.986317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.986331] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:01.632 [2024-11-28 00:10:15.986434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:01.632 [2024-11-28 00:10:15.986445] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.632 [2024-11-28 00:10:15.986458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:01.632 [2024-11-28 00:10:15.986472] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.632 [2024-11-28 00:10:15.986479] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.632 [2024-11-28 00:10:15.986486] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:01.632 [2024-11-28 00:10:15.986491] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:01.632 [2024-11-28 00:10:15.986498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:01.632 [2024-11-28 00:10:15.986503] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:01.632 [2024-11-28 00:10:15.986512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.986518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.632 [2024-11-28 00:10:15.986525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:16:01.632 [2024-11-28 00:10:15.986530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.986581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.632 [2024-11-28 00:10:15.986587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.632 [2024-11-28 00:10:15.986596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:01.632 [2024-11-28 00:10:15.986602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.632 [2024-11-28 00:10:15.986661] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.632 [2024-11-28 00:10:15.986668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.632 [2024-11-28 00:10:15.986676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.632 [2024-11-28 00:10:15.986681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.632 [2024-11-28 00:10:15.986688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.632 [2024-11-28 00:10:15.986693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.632 [2024-11-28 00:10:15.986699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:01.632 [2024-11-28 00:10:15.986704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.632 [2024-11-28 00:10:15.986711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:01.632 [2024-11-28 00:10:15.986716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.632 [2024-11-28 00:10:15.986722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.632 [2024-11-28 00:10:15.986727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:01.632 [2024-11-28 00:10:15.986734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.632 [2024-11-28 00:10:15.986739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.632 [2024-11-28 00:10:15.986745] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:16:01.633 [2024-11-28 00:10:15.986750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.633 [2024-11-28 00:10:15.986762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:01.633 [2024-11-28 00:10:15.986768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:01.633 [2024-11-28 00:10:15.986779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:01.633 [2024-11-28 00:10:15.986784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.633 [2024-11-28 00:10:15.986795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.633 [2024-11-28 00:10:15.986811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:01.633 [2024-11-28 00:10:15.986829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:01.633 [2024-11-28 00:10:15.986846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.633 [2024-11-28 00:10:15.986861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.633 [2024-11-28 00:10:15.986872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.633 [2024-11-28 00:10:15.986878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:01.633 [2024-11-28 00:10:15.986884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.633 [2024-11-28 00:10:15.986890] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.633 [2024-11-28 00:10:15.986898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.633 [2024-11-28 00:10:15.986905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.633 [2024-11-28 00:10:15.986921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.633 [2024-11-28 00:10:15.986927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.633 [2024-11-28 00:10:15.986934] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:01.633 [2024-11-28 00:10:15.986940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.633 [2024-11-28 00:10:15.986948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.633 [2024-11-28 00:10:15.986954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:01.633 [2024-11-28 00:10:15.986961] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.633 [2024-11-28 00:10:15.986969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.633 [2024-11-28 00:10:15.986977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:01.633 [2024-11-28 00:10:15.986984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:01.633 [2024-11-28 00:10:15.986992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:01.633 [2024-11-28 00:10:15.986999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:01.633 [2024-11-28 00:10:15.987006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:01.633 [2024-11-28 00:10:15.987012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:01.633 [2024-11-28 00:10:15.987019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:01.633 [2024-11-28 00:10:15.987025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:01.633 [2024-11-28 00:10:15.987033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:01.633 [2024-11-28 00:10:15.987039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:01.633 [2024-11-28 00:10:15.987047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:01.633 [2024-11-28 00:10:15.987053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:01.633 [2024-11-28 00:10:15.987061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:01.633 [2024-11-28 00:10:15.987066] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.633 [2024-11-28 00:10:15.987074] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.633 [2024-11-28 00:10:15.987081] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.633 [2024-11-28 00:10:15.987089] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.633 [2024-11-28 00:10:15.987095] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.633 [2024-11-28 00:10:15.987102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.633 [2024-11-28 00:10:15.987109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:15.987116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.633 [2024-11-28 00:10:15.987122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:16:01.633 [2024-11-28 00:10:15.987129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:15.992464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:15.992492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:01.633 [2024-11-28 00:10:15.992499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.302 ms 00:16:01.633 [2024-11-28 00:10:15.992506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:15.992572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:15.992580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:01.633 [2024-11-28 00:10:15.992586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:01.633 [2024-11-28 00:10:15.992592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.000195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.000316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:01.633 [2024-11-28 00:10:16.000330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.574 ms 00:16:01.633 [2024-11-28 00:10:16.000337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.000358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.000381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:01.633 [2024-11-28 00:10:16.000387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:01.633 [2024-11-28 00:10:16.000394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.000681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.000705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:01.633 [2024-11-28 00:10:16.000717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:16:01.633 [2024-11-28 00:10:16.000723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.000804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.000817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:01.633 [2024-11-28 00:10:16.000823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 
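A quick cross-check of the layout figures printed above (the numbers come from this log; the arithmetic below is only an illustrative sketch, not part of the test run): the base bdev exposes 26476544 blocks of 4096 B, i.e. 103424 MiB, and the device maps 20971520 L2P entries at 4 B each, so the full L2P table is 80 MiB, matching the 80.00 MiB l2p region in the dump. The --l2p_dram_limit 10 argument then caps how much of that table stays resident in DRAM, which is why the notice a little further down reports a maximum resident size of 9 (of 10) MiB.

  # illustrative shell arithmetic only, re-deriving the sizes shown in the layout dump
  blocks=26476544; block_size=4096
  echo $(( blocks * block_size / 1024 / 1024 ))            # 103424 MiB base device capacity
  l2p_entries=20971520; l2p_addr_size=4
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))    # 80 MiB full L2P table
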
00:16:01.633 [2024-11-28 00:10:16.000830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.005557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.005588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:01.633 [2024-11-28 00:10:16.005595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.713 ms 00:16:01.633 [2024-11-28 00:10:16.005602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.012132] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:01.633 [2024-11-28 00:10:16.014413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.014526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:01.633 [2024-11-28 00:10:16.014540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.762 ms 00:16:01.633 [2024-11-28 00:10:16.014549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.067951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.633 [2024-11-28 00:10:16.067985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:01.633 [2024-11-28 00:10:16.067996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.377 ms 00:16:01.633 [2024-11-28 00:10:16.068002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.633 [2024-11-28 00:10:16.068033] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:01.633 [2024-11-28 00:10:16.068041] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:04.167 [2024-11-28 00:10:18.687089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.687275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:04.167 [2024-11-28 00:10:18.687300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2619.040 ms 00:16:04.167 [2024-11-28 00:10:18.687309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.687508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.687520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:04.167 [2024-11-28 00:10:18.687530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:16:04.167 [2024-11-28 00:10:18.687538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.691303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.691345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:04.167 [2024-11-28 00:10:18.691359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:16:04.167 [2024-11-28 00:10:18.691377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.694644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.694675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:04.167 [2024-11-28 00:10:18.694688] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:16:04.167 [2024-11-28 00:10:18.694696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.694867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.694877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:04.167 [2024-11-28 00:10:18.694887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:04.167 [2024-11-28 00:10:18.694896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.716166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.716203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:04.167 [2024-11-28 00:10:18.716214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.248 ms 00:16:04.167 [2024-11-28 00:10:18.716222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.720445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.720478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:04.167 [2024-11-28 00:10:18.720491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:16:04.167 [2024-11-28 00:10:18.720498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.721700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.721825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:04.167 [2024-11-28 00:10:18.721843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:16:04.167 [2024-11-28 00:10:18.721850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.725983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.726016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:04.167 [2024-11-28 00:10:18.726027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.106 ms 00:16:04.167 [2024-11-28 00:10:18.726033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.726075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.726085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:04.167 [2024-11-28 00:10:18.726095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:04.167 [2024-11-28 00:10:18.726102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.726165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.167 [2024-11-28 00:10:18.726173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:04.167 [2024-11-28 00:10:18.726184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:04.167 [2024-11-28 00:10:18.726191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.167 [2024-11-28 00:10:18.727030] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2748.037 ms, result 0 00:16:04.167 { 00:16:04.167 "name": 
"ftl0", 00:16:04.167 "uuid": "7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc" 00:16:04.167 } 00:16:04.167 00:10:18 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:16:04.167 00:10:18 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:04.426 00:10:18 -- ftl/restore.sh@63 -- # echo ']}' 00:16:04.426 00:10:18 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:04.686 [2024-11-28 00:10:19.107049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.107207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:04.686 [2024-11-28 00:10:19.107274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:04.686 [2024-11-28 00:10:19.107301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.107344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:04.686 [2024-11-28 00:10:19.107818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.107837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:04.686 [2024-11-28 00:10:19.107848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:16:04.686 [2024-11-28 00:10:19.107856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.108111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.108121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:04.686 [2024-11-28 00:10:19.108133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:16:04.686 [2024-11-28 00:10:19.108145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.111407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.111428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:04.686 [2024-11-28 00:10:19.111445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:16:04.686 [2024-11-28 00:10:19.111452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.117550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.117577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:04.686 [2024-11-28 00:10:19.117589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.078 ms 00:16:04.686 [2024-11-28 00:10:19.117597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.119834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.119864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:04.686 [2024-11-28 00:10:19.119875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:16:04.686 [2024-11-28 00:10:19.119882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.123905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.124016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:04.686 
[2024-11-28 00:10:19.124033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.987 ms 00:16:04.686 [2024-11-28 00:10:19.124040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.124160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.124170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:04.686 [2024-11-28 00:10:19.124180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:04.686 [2024-11-28 00:10:19.124187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.125933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.125965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:04.686 [2024-11-28 00:10:19.125975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:16:04.686 [2024-11-28 00:10:19.125982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.127491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.127518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:04.686 [2024-11-28 00:10:19.127528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:16:04.686 [2024-11-28 00:10:19.127535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.128595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.128623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:04.686 [2024-11-28 00:10:19.128634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:16:04.686 [2024-11-28 00:10:19.128640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.129812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.686 [2024-11-28 00:10:19.129842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:04.686 [2024-11-28 00:10:19.129852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:16:04.686 [2024-11-28 00:10:19.129858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.686 [2024-11-28 00:10:19.129889] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:04.686 [2024-11-28 00:10:19.129902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.129990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:04.686 [2024-11-28 00:10:19.130148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130383] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 
00:10:19.130617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:04.687 [2024-11-28 00:10:19.130785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:04.687 [2024-11-28 00:10:19.130793] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:16:04.687 [2024-11-28 00:10:19.130801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:04.687 [2024-11-28 00:10:19.130809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:04.687 [2024-11-28 00:10:19.130816] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:04.687 [2024-11-28 00:10:19.130824] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:04.687 [2024-11-28 00:10:19.130831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:04.687 [2024-11-28 
00:10:19.130842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:04.687 [2024-11-28 00:10:19.130849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:04.687 [2024-11-28 00:10:19.130857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:04.687 [2024-11-28 00:10:19.130863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:04.687 [2024-11-28 00:10:19.130871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.687 [2024-11-28 00:10:19.130878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:04.687 [2024-11-28 00:10:19.130888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:16:04.687 [2024-11-28 00:10:19.130897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.687 [2024-11-28 00:10:19.132339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.687 [2024-11-28 00:10:19.132372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:04.687 [2024-11-28 00:10:19.132384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:16:04.687 [2024-11-28 00:10:19.132391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.687 [2024-11-28 00:10:19.132445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.687 [2024-11-28 00:10:19.132454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:04.687 [2024-11-28 00:10:19.132465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:04.687 [2024-11-28 00:10:19.132472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.687 [2024-11-28 00:10:19.137632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.137750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:04.688 [2024-11-28 00:10:19.137768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.137780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.137836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.137844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:04.688 [2024-11-28 00:10:19.137855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.137862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.137928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.137937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:04.688 [2024-11-28 00:10:19.137946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.137953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.137972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.137979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:04.688 [2024-11-28 00:10:19.137988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.137996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.147075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.147115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:04.688 [2024-11-28 00:10:19.147127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.147134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.150756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.150787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:04.688 [2024-11-28 00:10:19.150801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.150808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.150850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.150859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:04.688 [2024-11-28 00:10:19.150868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.150875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.150921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.150929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:04.688 [2024-11-28 00:10:19.150939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.150945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.151015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.151024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:04.688 [2024-11-28 00:10:19.151034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.151040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.151070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.151078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:04.688 [2024-11-28 00:10:19.151087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.151093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.151130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.151138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:04.688 [2024-11-28 00:10:19.151150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 [2024-11-28 00:10:19.151157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.151199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.688 [2024-11-28 00:10:19.151208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:04.688 [2024-11-28 00:10:19.151217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.688 
[2024-11-28 00:10:19.151224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.688 [2024-11-28 00:10:19.151359] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.273 ms, result 0 00:16:04.688 true 00:16:04.688 00:10:19 -- ftl/restore.sh@66 -- # killprocess 83773 00:16:04.688 00:10:19 -- common/autotest_common.sh@936 -- # '[' -z 83773 ']' 00:16:04.688 00:10:19 -- common/autotest_common.sh@940 -- # kill -0 83773 00:16:04.688 00:10:19 -- common/autotest_common.sh@941 -- # uname 00:16:04.688 00:10:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.688 00:10:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83773 00:16:04.688 killing process with pid 83773 00:16:04.688 00:10:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:04.688 00:10:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:04.688 00:10:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83773' 00:16:04.688 00:10:19 -- common/autotest_common.sh@955 -- # kill 83773 00:16:04.688 00:10:19 -- common/autotest_common.sh@960 -- # wait 83773 00:16:09.955 00:10:24 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:16:14.141 262144+0 records in 00:16:14.141 262144+0 records out 00:16:14.141 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00127 s, 268 MB/s 00:16:14.141 00:10:28 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:16:15.518 00:10:30 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:15.776 [2024-11-28 00:10:30.148678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
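The dd, md5sum and spdk_dd commands just above are the write half of the restore test's data-integrity check: fill a 1 GiB file with random data, record its digest, then push it into the FTL bdev with spdk_dd using the JSON config saved earlier. A minimal sketch of that pattern (paths shortened; redirecting the digest to a file and the later read-back/compare step are assumptions, not shown in this excerpt):

  dd if=/dev/urandom of=testfile bs=4K count=256K           # 262144 * 4 KiB = 1 GiB of random data
  md5sum testfile > testfile.md5                            # remember what was written
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json           # write it through the ftl0 bdev
  # after the device is torn down and restored, read ftl0 back into a file
  # and check it against testfile.md5 to confirm the data survived
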
00:16:15.776 [2024-11-28 00:10:30.148763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84014 ] 00:16:15.776 [2024-11-28 00:10:30.287880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.776 [2024-11-28 00:10:30.315669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.036 [2024-11-28 00:10:30.395204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:16.036 [2024-11-28 00:10:30.395437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:16.036 [2024-11-28 00:10:30.534373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.534498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:16.036 [2024-11-28 00:10:30.534513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.036 [2024-11-28 00:10:30.534520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.534566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.534574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.036 [2024-11-28 00:10:30.534580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:16.036 [2024-11-28 00:10:30.534587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.534605] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:16.036 [2024-11-28 00:10:30.534784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:16.036 [2024-11-28 00:10:30.534795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.534803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.036 [2024-11-28 00:10:30.534813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:16:16.036 [2024-11-28 00:10:30.534818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.535743] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:16.036 [2024-11-28 00:10:30.537775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.537891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:16.036 [2024-11-28 00:10:30.537907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:16:16.036 [2024-11-28 00:10:30.537913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.537952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.537959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:16.036 [2024-11-28 00:10:30.537969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:16.036 [2024-11-28 00:10:30.537974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.542385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 
00:10:30.542412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.036 [2024-11-28 00:10:30.542419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.374 ms 00:16:16.036 [2024-11-28 00:10:30.542425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.036 [2024-11-28 00:10:30.542477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.036 [2024-11-28 00:10:30.542487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.037 [2024-11-28 00:10:30.542493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:16.037 [2024-11-28 00:10:30.542498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.542531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.037 [2024-11-28 00:10:30.542542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:16.037 [2024-11-28 00:10:30.542552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:16.037 [2024-11-28 00:10:30.542557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.542572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.037 [2024-11-28 00:10:30.543721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.037 [2024-11-28 00:10:30.543746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.037 [2024-11-28 00:10:30.543753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:16:16.037 [2024-11-28 00:10:30.543758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.543787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.037 [2024-11-28 00:10:30.543794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:16.037 [2024-11-28 00:10:30.543802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:16.037 [2024-11-28 00:10:30.543811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.543824] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:16.037 [2024-11-28 00:10:30.543838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:16.037 [2024-11-28 00:10:30.543863] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:16.037 [2024-11-28 00:10:30.543873] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:16.037 [2024-11-28 00:10:30.543928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:16.037 [2024-11-28 00:10:30.543938] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:16.037 [2024-11-28 00:10:30.543947] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:16.037 [2024-11-28 00:10:30.543954] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:16.037 [2024-11-28 00:10:30.543961] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:16.037 [2024-11-28 00:10:30.543968] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:16.037 [2024-11-28 00:10:30.543973] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:16.037 [2024-11-28 00:10:30.543979] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:16.037 [2024-11-28 00:10:30.543984] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:16.037 [2024-11-28 00:10:30.543990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.037 [2024-11-28 00:10:30.543995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:16.037 [2024-11-28 00:10:30.544004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:16.037 [2024-11-28 00:10:30.544013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.544057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.037 [2024-11-28 00:10:30.544063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:16.037 [2024-11-28 00:10:30.544068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:16.037 [2024-11-28 00:10:30.544076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.037 [2024-11-28 00:10:30.544128] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:16.037 [2024-11-28 00:10:30.544135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:16.037 [2024-11-28 00:10:30.544141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:16.037 [2024-11-28 00:10:30.544161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:16.037 [2024-11-28 00:10:30.544177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.037 [2024-11-28 00:10:30.544187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:16.037 [2024-11-28 00:10:30.544193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:16.037 [2024-11-28 00:10:30.544198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.037 [2024-11-28 00:10:30.544207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:16.037 [2024-11-28 00:10:30.544212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:16.037 [2024-11-28 00:10:30.544217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:16.037 [2024-11-28 00:10:30.544227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:16.037 [2024-11-28 00:10:30.544231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:16.037 [2024-11-28 00:10:30.544243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:16.037 [2024-11-28 00:10:30.544248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:16.037 [2024-11-28 00:10:30.544258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:16.037 [2024-11-28 00:10:30.544272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:16.037 [2024-11-28 00:10:30.544286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:16.037 [2024-11-28 00:10:30.544300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:16.037 [2024-11-28 00:10:30.544317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.037 [2024-11-28 00:10:30.544327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:16.037 [2024-11-28 00:10:30.544332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:16.037 [2024-11-28 00:10:30.544336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.037 [2024-11-28 00:10:30.544341] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:16.037 [2024-11-28 00:10:30.544346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:16.037 [2024-11-28 00:10:30.544351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.037 [2024-11-28 00:10:30.544382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:16.037 [2024-11-28 00:10:30.544389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:16.037 [2024-11-28 00:10:30.544395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:16.037 [2024-11-28 00:10:30.544401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:16.037 [2024-11-28 00:10:30.544406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:16.037 [2024-11-28 00:10:30.544412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:16.037 [2024-11-28 00:10:30.544419] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:16.037 [2024-11-28 00:10:30.544428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.037 [2024-11-28 00:10:30.544437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:16.037 [2024-11-28 00:10:30.544444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:16.037 [2024-11-28 00:10:30.544450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:16.037 [2024-11-28 00:10:30.544456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:16.037 [2024-11-28 00:10:30.544462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:16.037 [2024-11-28 00:10:30.544468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:16.037 [2024-11-28 00:10:30.544474] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:16.037 [2024-11-28 00:10:30.544480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:16.037 [2024-11-28 00:10:30.544486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:16.037 [2024-11-28 00:10:30.544492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:16.037 [2024-11-28 00:10:30.544498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:16.037 [2024-11-28 00:10:30.544505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:16.038 [2024-11-28 00:10:30.544511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:16.038 [2024-11-28 00:10:30.544517] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:16.038 [2024-11-28 00:10:30.544525] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.038 [2024-11-28 00:10:30.544536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:16.038 [2024-11-28 00:10:30.544542] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:16.038 [2024-11-28 00:10:30.544548] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:16.038 [2024-11-28 00:10:30.544554] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:16.038 [2024-11-28 00:10:30.544561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.544567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:16.038 [2024-11-28 00:10:30.544573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:16:16.038 [2024-11-28 00:10:30.544581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.550057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.550163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.038 [2024-11-28 00:10:30.550214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.448 ms 00:16:16.038 [2024-11-28 00:10:30.550236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.550310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.550415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:16.038 [2024-11-28 00:10:30.550435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:16.038 [2024-11-28 00:10:30.550449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.568710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.568846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.038 [2024-11-28 00:10:30.568901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.213 ms 00:16:16.038 [2024-11-28 00:10:30.568924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.568983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.569007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.038 [2024-11-28 00:10:30.569030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:16.038 [2024-11-28 00:10:30.569053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.569465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.569580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.038 [2024-11-28 00:10:30.569649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:16:16.038 [2024-11-28 00:10:30.569677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.569838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.569876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.038 [2024-11-28 00:10:30.569936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:16.038 [2024-11-28 00:10:30.569991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.575684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.575801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.038 [2024-11-28 00:10:30.575862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.646 ms 00:16:16.038 [2024-11-28 
00:10:30.576133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.578739] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:16.038 [2024-11-28 00:10:30.578905] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:16.038 [2024-11-28 00:10:30.578993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.579020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:16.038 [2024-11-28 00:10:30.579074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:16:16.038 [2024-11-28 00:10:30.579102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.604580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.604720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:16.038 [2024-11-28 00:10:30.604784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.232 ms 00:16:16.038 [2024-11-28 00:10:30.604795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.607036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.607073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:16.038 [2024-11-28 00:10:30.607083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:16:16.038 [2024-11-28 00:10:30.607090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.608881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.608909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:16.038 [2024-11-28 00:10:30.608918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:16:16.038 [2024-11-28 00:10:30.608929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.609109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.609128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:16.038 [2024-11-28 00:10:30.609138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:16.038 [2024-11-28 00:10:30.609145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.626287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.626328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:16.038 [2024-11-28 00:10:30.626338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.126 ms 00:16:16.038 [2024-11-28 00:10:30.626346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.633577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:16.038 [2024-11-28 00:10:30.635746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.635781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:16.038 [2024-11-28 00:10:30.635791] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.342 ms 00:16:16.038 [2024-11-28 00:10:30.635802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.635855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.635866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:16.038 [2024-11-28 00:10:30.635875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.038 [2024-11-28 00:10:30.635886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.038 [2024-11-28 00:10:30.635934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.038 [2024-11-28 00:10:30.635948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:16.038 [2024-11-28 00:10:30.635958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:16.038 [2024-11-28 00:10:30.635966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.637089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-11-28 00:10:30.637218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:16.297 [2024-11-28 00:10:30.637233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:16:16.297 [2024-11-28 00:10:30.637241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.637267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-11-28 00:10:30.637275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:16.297 [2024-11-28 00:10:30.637283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.297 [2024-11-28 00:10:30.637294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.637337] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:16.297 [2024-11-28 00:10:30.637346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-11-28 00:10:30.637353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:16.297 [2024-11-28 00:10:30.637376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:16.297 [2024-11-28 00:10:30.637386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.640472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-11-28 00:10:30.640500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:16.297 [2024-11-28 00:10:30.640515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:16:16.297 [2024-11-28 00:10:30.640522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.640586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-11-28 00:10:30.640595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:16.297 [2024-11-28 00:10:30.640603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:16.297 [2024-11-28 00:10:30.640613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-11-28 00:10:30.641446] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 106.677 ms, result 0 00:16:17.233  [2024-11-28T00:10:32.770Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-28T00:10:33.705Z] Copying: 53/1024 [MB] (24 MBps) [2024-11-28T00:10:35.082Z] Copying: 80/1024 [MB] (26 MBps) [2024-11-28T00:10:36.019Z] Copying: 110/1024 [MB] (30 MBps) [2024-11-28T00:10:36.952Z] Copying: 136/1024 [MB] (26 MBps) [2024-11-28T00:10:37.884Z] Copying: 159/1024 [MB] (22 MBps) [2024-11-28T00:10:38.814Z] Copying: 175/1024 [MB] (16 MBps) [2024-11-28T00:10:39.754Z] Copying: 194/1024 [MB] (18 MBps) [2024-11-28T00:10:40.728Z] Copying: 212/1024 [MB] (18 MBps) [2024-11-28T00:10:41.664Z] Copying: 230/1024 [MB] (18 MBps) [2024-11-28T00:10:43.041Z] Copying: 252/1024 [MB] (21 MBps) [2024-11-28T00:10:43.977Z] Copying: 272/1024 [MB] (19 MBps) [2024-11-28T00:10:44.915Z] Copying: 289/1024 [MB] (17 MBps) [2024-11-28T00:10:45.850Z] Copying: 304/1024 [MB] (14 MBps) [2024-11-28T00:10:46.787Z] Copying: 340/1024 [MB] (36 MBps) [2024-11-28T00:10:47.724Z] Copying: 359/1024 [MB] (18 MBps) [2024-11-28T00:10:48.660Z] Copying: 380/1024 [MB] (21 MBps) [2024-11-28T00:10:50.037Z] Copying: 404/1024 [MB] (23 MBps) [2024-11-28T00:10:50.973Z] Copying: 449/1024 [MB] (45 MBps) [2024-11-28T00:10:51.909Z] Copying: 476/1024 [MB] (27 MBps) [2024-11-28T00:10:52.844Z] Copying: 505/1024 [MB] (28 MBps) [2024-11-28T00:10:53.780Z] Copying: 532/1024 [MB] (26 MBps) [2024-11-28T00:10:54.714Z] Copying: 554/1024 [MB] (21 MBps) [2024-11-28T00:10:56.089Z] Copying: 577/1024 [MB] (23 MBps) [2024-11-28T00:10:56.656Z] Copying: 604/1024 [MB] (26 MBps) [2024-11-28T00:10:58.029Z] Copying: 624/1024 [MB] (19 MBps) [2024-11-28T00:10:58.964Z] Copying: 645/1024 [MB] (21 MBps) [2024-11-28T00:10:59.898Z] Copying: 673/1024 [MB] (27 MBps) [2024-11-28T00:11:00.832Z] Copying: 696/1024 [MB] (22 MBps) [2024-11-28T00:11:01.766Z] Copying: 734/1024 [MB] (37 MBps) [2024-11-28T00:11:02.700Z] Copying: 779/1024 [MB] (45 MBps) [2024-11-28T00:11:03.705Z] Copying: 826/1024 [MB] (46 MBps) [2024-11-28T00:11:05.080Z] Copying: 874/1024 [MB] (47 MBps) [2024-11-28T00:11:06.012Z] Copying: 921/1024 [MB] (47 MBps) [2024-11-28T00:11:06.946Z] Copying: 966/1024 [MB] (45 MBps) [2024-11-28T00:11:06.946Z] Copying: 1012/1024 [MB] (45 MBps) [2024-11-28T00:11:06.947Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-28 00:11:06.916246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.916291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:52.345 [2024-11-28 00:11:06.916309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.345 [2024-11-28 00:11:06.916317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.916336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:52.345 [2024-11-28 00:11:06.916782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.916803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:52.345 [2024-11-28 00:11:06.916812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:16:52.345 [2024-11-28 00:11:06.916819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.918201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.918236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:16:52.345 [2024-11-28 00:11:06.918245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:16:52.345 [2024-11-28 00:11:06.918253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.930656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.930785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:52.345 [2024-11-28 00:11:06.930806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:16:52.345 [2024-11-28 00:11:06.930813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.936904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.936937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:52.345 [2024-11-28 00:11:06.936948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:16:52.345 [2024-11-28 00:11:06.936956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.938251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.938354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:52.345 [2024-11-28 00:11:06.938377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:16:52.345 [2024-11-28 00:11:06.938384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.941727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.941843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:52.345 [2024-11-28 00:11:06.941861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:16:52.345 [2024-11-28 00:11:06.941868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.941973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.941982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:52.345 [2024-11-28 00:11:06.941991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:52.345 [2024-11-28 00:11:06.941998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.943626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.345 [2024-11-28 00:11:06.943655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:52.345 [2024-11-28 00:11:06.943664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:16:52.345 [2024-11-28 00:11:06.943672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.345 [2024-11-28 00:11:06.945069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.605 [2024-11-28 00:11:06.945179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:52.605 [2024-11-28 00:11:06.945192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:16:52.605 [2024-11-28 00:11:06.945199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.605 [2024-11-28 00:11:06.946353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.605 [2024-11-28 
00:11:06.946404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:52.605 [2024-11-28 00:11:06.946412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:16:52.605 [2024-11-28 00:11:06.946419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.605 [2024-11-28 00:11:06.947518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.605 [2024-11-28 00:11:06.947548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:52.605 [2024-11-28 00:11:06.947557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:16:52.605 [2024-11-28 00:11:06.947565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.605 [2024-11-28 00:11:06.947592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:52.605 [2024-11-28 00:11:06.947611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947933] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.947998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948117] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:52.605 [2024-11-28 00:11:06.948176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 
00:11:06.948300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:52.606 [2024-11-28 00:11:06.948360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:52.606 [2024-11-28 00:11:06.948640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:16:52.606 [2024-11-28 00:11:06.948670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:52.606 [2024-11-28 00:11:06.948688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:52.606 [2024-11-28 00:11:06.948707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:52.606 [2024-11-28 00:11:06.948829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:52.606 [2024-11-28 00:11:06.948852] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:52.606 [2024-11-28 00:11:06.948870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:52.606 [2024-11-28 00:11:06.948897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:52.606 [2024-11-28 00:11:06.948915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:52.606 [2024-11-28 00:11:06.948932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:52.606 [2024-11-28 00:11:06.948951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.606 [2024-11-28 00:11:06.949004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:52.606 [2024-11-28 00:11:06.949027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:16:52.606 [2024-11-28 00:11:06.949045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.950395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.606 [2024-11-28 00:11:06.950484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:52.606 [2024-11-28 00:11:06.950536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:16:52.606 [2024-11-28 00:11:06.950557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.950617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.606 [2024-11-28 00:11:06.950667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:52.606 [2024-11-28 00:11:06.950701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:52.606 [2024-11-28 00:11:06.950721] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.955555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.955653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.606 [2024-11-28 00:11:06.955702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.955731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.955815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.955827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.606 [2024-11-28 00:11:06.955835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.955846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.955911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.955921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.606 [2024-11-28 00:11:06.955929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.955936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.955950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.955958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.606 [2024-11-28 00:11:06.955965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.955972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.963954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.963990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.606 [2024-11-28 00:11:06.963999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.964006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.606 [2024-11-28 00:11:06.967551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.606 [2024-11-28 00:11:06.967611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.606 [2024-11-28 00:11:06.967673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.606 [2024-11-28 00:11:06.967769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:52.606 [2024-11-28 00:11:06.967820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.606 [2024-11-28 00:11:06.967879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.967927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.606 [2024-11-28 00:11:06.967936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.606 [2024-11-28 00:11:06.967943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.606 [2024-11-28 00:11:06.967950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.606 [2024-11-28 00:11:06.968059] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.780 ms, result 0 00:16:53.541 00:16:53.541 00:16:53.541 00:11:07 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:16:53.541 [2024-11-28 00:11:08.049949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:53.541 [2024-11-28 00:11:08.050213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84409 ] 00:16:53.799 [2024-11-28 00:11:08.195013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.799 [2024-11-28 00:11:08.224638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.799 [2024-11-28 00:11:08.307255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:53.799 [2024-11-28 00:11:08.307323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:54.059 [2024-11-28 00:11:08.452213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.452255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:54.059 [2024-11-28 00:11:08.452268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.059 [2024-11-28 00:11:08.452276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.452320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.452333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.059 [2024-11-28 00:11:08.452341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:54.059 [2024-11-28 00:11:08.452353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.452388] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:54.059 [2024-11-28 00:11:08.452614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:54.059 [2024-11-28 00:11:08.452627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.452636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.059 [2024-11-28 00:11:08.452645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:16:54.059 [2024-11-28 00:11:08.452655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.453669] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:54.059 [2024-11-28 00:11:08.455784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.455926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:54.059 [2024-11-28 00:11:08.455947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:16:54.059 [2024-11-28 00:11:08.455955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.456009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.456019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:54.059 [2024-11-28 00:11:08.456031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:54.059 [2024-11-28 00:11:08.456037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.460740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 
00:11:08.460771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.059 [2024-11-28 00:11:08.460781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:16:54.059 [2024-11-28 00:11:08.460790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.460853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.460862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.059 [2024-11-28 00:11:08.460872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:54.059 [2024-11-28 00:11:08.460879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.460919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.460928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:54.059 [2024-11-28 00:11:08.460937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.059 [2024-11-28 00:11:08.460944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.460965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.059 [2024-11-28 00:11:08.462241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.462270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.059 [2024-11-28 00:11:08.462279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:16:54.059 [2024-11-28 00:11:08.462287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.462319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.462327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:54.059 [2024-11-28 00:11:08.462337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:54.059 [2024-11-28 00:11:08.462347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.462382] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:54.059 [2024-11-28 00:11:08.462403] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:54.059 [2024-11-28 00:11:08.462438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:54.059 [2024-11-28 00:11:08.462454] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:54.059 [2024-11-28 00:11:08.462527] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:54.059 [2024-11-28 00:11:08.462539] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:54.059 [2024-11-28 00:11:08.462551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:54.059 [2024-11-28 00:11:08.462560] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:54.059 [2024-11-28 00:11:08.462569] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:54.059 [2024-11-28 00:11:08.462579] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:54.059 [2024-11-28 00:11:08.462586] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:54.059 [2024-11-28 00:11:08.462593] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:54.059 [2024-11-28 00:11:08.462599] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:54.059 [2024-11-28 00:11:08.462606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.462613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:54.059 [2024-11-28 00:11:08.462621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:16:54.059 [2024-11-28 00:11:08.462629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.462688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.059 [2024-11-28 00:11:08.462696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:54.059 [2024-11-28 00:11:08.462707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:54.059 [2024-11-28 00:11:08.462713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.059 [2024-11-28 00:11:08.462782] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:54.059 [2024-11-28 00:11:08.462791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:54.059 [2024-11-28 00:11:08.462803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.059 [2024-11-28 00:11:08.462813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.059 [2024-11-28 00:11:08.462822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:54.059 [2024-11-28 00:11:08.462829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:54.059 [2024-11-28 00:11:08.462835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:54.059 [2024-11-28 00:11:08.462841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:54.059 [2024-11-28 00:11:08.462848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:54.059 [2024-11-28 00:11:08.462854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.059 [2024-11-28 00:11:08.462860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:54.059 [2024-11-28 00:11:08.462867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:54.059 [2024-11-28 00:11:08.462873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.059 [2024-11-28 00:11:08.462885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:54.059 [2024-11-28 00:11:08.462892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:54.059 [2024-11-28 00:11:08.462899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.059 [2024-11-28 00:11:08.462907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:54.059 [2024-11-28 00:11:08.462914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:54.059 [2024-11-28 00:11:08.462921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:54.059 [2024-11-28 00:11:08.462930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:54.059 [2024-11-28 00:11:08.462937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:54.059 [2024-11-28 00:11:08.462945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:54.059 [2024-11-28 00:11:08.462952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:54.060 [2024-11-28 00:11:08.462959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:54.060 [2024-11-28 00:11:08.462967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.060 [2024-11-28 00:11:08.462974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:54.060 [2024-11-28 00:11:08.462981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:54.060 [2024-11-28 00:11:08.462988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.060 [2024-11-28 00:11:08.462995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:54.060 [2024-11-28 00:11:08.463002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:54.060 [2024-11-28 00:11:08.463009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.060 [2024-11-28 00:11:08.463016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:54.060 [2024-11-28 00:11:08.463023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:54.060 [2024-11-28 00:11:08.463030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.060 [2024-11-28 00:11:08.463038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:54.060 [2024-11-28 00:11:08.463049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:54.060 [2024-11-28 00:11:08.463056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.060 [2024-11-28 00:11:08.463063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:54.060 [2024-11-28 00:11:08.463070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:54.060 [2024-11-28 00:11:08.463077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.060 [2024-11-28 00:11:08.463084] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:54.060 [2024-11-28 00:11:08.463095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:54.060 [2024-11-28 00:11:08.463103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.060 [2024-11-28 00:11:08.463112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.060 [2024-11-28 00:11:08.463120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:54.060 [2024-11-28 00:11:08.463128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:54.060 [2024-11-28 00:11:08.463135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:54.060 [2024-11-28 00:11:08.463143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:54.060 [2024-11-28 00:11:08.463150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:54.060 [2024-11-28 00:11:08.463158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:54.060 [2024-11-28 00:11:08.463166] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:54.060 [2024-11-28 00:11:08.463177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.060 [2024-11-28 00:11:08.463186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:54.060 [2024-11-28 00:11:08.463194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:54.060 [2024-11-28 00:11:08.463202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:54.060 [2024-11-28 00:11:08.463210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:54.060 [2024-11-28 00:11:08.463218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:54.060 [2024-11-28 00:11:08.463226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:54.060 [2024-11-28 00:11:08.463234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:54.060 [2024-11-28 00:11:08.463241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:54.060 [2024-11-28 00:11:08.463249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:54.060 [2024-11-28 00:11:08.463257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:54.060 [2024-11-28 00:11:08.463266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:54.060 [2024-11-28 00:11:08.463274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:54.060 [2024-11-28 00:11:08.463282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:54.060 [2024-11-28 00:11:08.463289] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:54.060 [2024-11-28 00:11:08.463296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.060 [2024-11-28 00:11:08.463309] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:54.060 [2024-11-28 00:11:08.463316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:54.060 [2024-11-28 00:11:08.463322] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:54.060 [2024-11-28 00:11:08.463329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:54.060 [2024-11-28 00:11:08.463336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.463343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:54.060 [2024-11-28 00:11:08.463350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:16:54.060 [2024-11-28 00:11:08.463359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.469093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.469131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.060 [2024-11-28 00:11:08.469141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.684 ms 00:16:54.060 [2024-11-28 00:11:08.469148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.469226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.469234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:54.060 [2024-11-28 00:11:08.469242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:54.060 [2024-11-28 00:11:08.469249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.485626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.485669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.060 [2024-11-28 00:11:08.485685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.334 ms 00:16:54.060 [2024-11-28 00:11:08.485695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.485744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.485757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.060 [2024-11-28 00:11:08.485769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.060 [2024-11-28 00:11:08.485785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.486161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.486181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.060 [2024-11-28 00:11:08.486203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:16:54.060 [2024-11-28 00:11:08.486213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.486409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.486425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.060 [2024-11-28 00:11:08.486437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:54.060 [2024-11-28 00:11:08.486448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.492354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.492406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.060 [2024-11-28 00:11:08.492418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.877 ms 00:16:54.060 [2024-11-28 
00:11:08.492429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.494963] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:54.060 [2024-11-28 00:11:08.495004] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:54.060 [2024-11-28 00:11:08.495023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.495033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:54.060 [2024-11-28 00:11:08.495043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:16:54.060 [2024-11-28 00:11:08.495053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.509862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.509899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:54.060 [2024-11-28 00:11:08.509909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.763 ms 00:16:54.060 [2024-11-28 00:11:08.509916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.511734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.511855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:54.060 [2024-11-28 00:11:08.511869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:16:54.060 [2024-11-28 00:11:08.511877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.513271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.513301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:54.060 [2024-11-28 00:11:08.513311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:16:54.060 [2024-11-28 00:11:08.513322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.060 [2024-11-28 00:11:08.513517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.060 [2024-11-28 00:11:08.513528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:54.061 [2024-11-28 00:11:08.513536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:16:54.061 [2024-11-28 00:11:08.513546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.530598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.530756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:54.061 [2024-11-28 00:11:08.530773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.036 ms 00:16:54.061 [2024-11-28 00:11:08.530780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.538028] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:54.061 [2024-11-28 00:11:08.540293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.540323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:54.061 [2024-11-28 00:11:08.540334] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.474 ms 00:16:54.061 [2024-11-28 00:11:08.540343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.540416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.540428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:54.061 [2024-11-28 00:11:08.540437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.061 [2024-11-28 00:11:08.540452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.540500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.540516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:54.061 [2024-11-28 00:11:08.540525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:54.061 [2024-11-28 00:11:08.540536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.541709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.541737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:54.061 [2024-11-28 00:11:08.541745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:16:54.061 [2024-11-28 00:11:08.541758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.541787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.541795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.061 [2024-11-28 00:11:08.541805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:54.061 [2024-11-28 00:11:08.541812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.541855] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:54.061 [2024-11-28 00:11:08.541865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.541872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:54.061 [2024-11-28 00:11:08.541880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:54.061 [2024-11-28 00:11:08.541889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.545297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.545446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.061 [2024-11-28 00:11:08.545464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms 00:16:54.061 [2024-11-28 00:11:08.545479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.545542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.061 [2024-11-28 00:11:08.545552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.061 [2024-11-28 00:11:08.545563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:54.061 [2024-11-28 00:11:08.545570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.061 [2024-11-28 00:11:08.546440] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 93.844 ms, result 0 00:16:55.435  [2024-11-28T00:11:10.969Z] Copying: 48/1024 [MB] (48 MBps) [2024-11-28T00:11:11.899Z] Copying: 96/1024 [MB] (47 MBps) [2024-11-28T00:11:12.832Z] Copying: 147/1024 [MB] (50 MBps) [2024-11-28T00:11:13.765Z] Copying: 195/1024 [MB] (48 MBps) [2024-11-28T00:11:15.138Z] Copying: 244/1024 [MB] (48 MBps) [2024-11-28T00:11:16.082Z] Copying: 290/1024 [MB] (46 MBps) [2024-11-28T00:11:17.017Z] Copying: 339/1024 [MB] (48 MBps) [2024-11-28T00:11:17.950Z] Copying: 390/1024 [MB] (50 MBps) [2024-11-28T00:11:18.884Z] Copying: 440/1024 [MB] (50 MBps) [2024-11-28T00:11:19.818Z] Copying: 490/1024 [MB] (50 MBps) [2024-11-28T00:11:20.750Z] Copying: 538/1024 [MB] (47 MBps) [2024-11-28T00:11:22.123Z] Copying: 560/1024 [MB] (22 MBps) [2024-11-28T00:11:23.057Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-28T00:11:23.991Z] Copying: 593/1024 [MB] (14 MBps) [2024-11-28T00:11:24.924Z] Copying: 617/1024 [MB] (24 MBps) [2024-11-28T00:11:25.858Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-28T00:11:26.806Z] Copying: 654/1024 [MB] (18 MBps) [2024-11-28T00:11:27.739Z] Copying: 670/1024 [MB] (16 MBps) [2024-11-28T00:11:29.112Z] Copying: 697/1024 [MB] (26 MBps) [2024-11-28T00:11:30.045Z] Copying: 726/1024 [MB] (29 MBps) [2024-11-28T00:11:30.978Z] Copying: 749/1024 [MB] (23 MBps) [2024-11-28T00:11:31.912Z] Copying: 770/1024 [MB] (21 MBps) [2024-11-28T00:11:32.845Z] Copying: 792/1024 [MB] (21 MBps) [2024-11-28T00:11:33.776Z] Copying: 814/1024 [MB] (22 MBps) [2024-11-28T00:11:35.149Z] Copying: 829/1024 [MB] (14 MBps) [2024-11-28T00:11:36.082Z] Copying: 843/1024 [MB] (14 MBps) [2024-11-28T00:11:37.015Z] Copying: 858/1024 [MB] (14 MBps) [2024-11-28T00:11:37.949Z] Copying: 872/1024 [MB] (14 MBps) [2024-11-28T00:11:38.884Z] Copying: 887/1024 [MB] (14 MBps) [2024-11-28T00:11:39.819Z] Copying: 902/1024 [MB] (15 MBps) [2024-11-28T00:11:40.753Z] Copying: 918/1024 [MB] (15 MBps) [2024-11-28T00:11:42.127Z] Copying: 933/1024 [MB] (15 MBps) [2024-11-28T00:11:43.060Z] Copying: 947/1024 [MB] (14 MBps) [2024-11-28T00:11:43.993Z] Copying: 962/1024 [MB] (14 MBps) [2024-11-28T00:11:44.928Z] Copying: 976/1024 [MB] (14 MBps) [2024-11-28T00:11:45.861Z] Copying: 991/1024 [MB] (14 MBps) [2024-11-28T00:11:46.794Z] Copying: 1006/1024 [MB] (14 MBps) [2024-11-28T00:11:47.052Z] Copying: 1021/1024 [MB] (14 MBps) [2024-11-28T00:11:47.310Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-28 00:11:47.284550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.708 [2024-11-28 00:11:47.284635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.708 [2024-11-28 00:11:47.284659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.708 [2024-11-28 00:11:47.284674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.708 [2024-11-28 00:11:47.284711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.708 [2024-11-28 00:11:47.285343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.708 [2024-11-28 00:11:47.285404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.708 [2024-11-28 00:11:47.285422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:17:32.708 [2024-11-28 00:11:47.285436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.708 [2024-11-28 00:11:47.287020] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.708 [2024-11-28 00:11:47.287052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.708 [2024-11-28 00:11:47.287066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:17:32.708 [2024-11-28 00:11:47.287077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.708 [2024-11-28 00:11:47.292678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.708 [2024-11-28 00:11:47.292711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.708 [2024-11-28 00:11:47.292724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.581 ms 00:17:32.708 [2024-11-28 00:11:47.292735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.709 [2024-11-28 00:11:47.300672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.709 [2024-11-28 00:11:47.300697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:32.709 [2024-11-28 00:11:47.300706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.909 ms 00:17:32.709 [2024-11-28 00:11:47.300713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.709 [2024-11-28 00:11:47.302848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.709 [2024-11-28 00:11:47.302879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.709 [2024-11-28 00:11:47.302887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:17:32.709 [2024-11-28 00:11:47.302894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.709 [2024-11-28 00:11:47.306426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.709 [2024-11-28 00:11:47.306463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.709 [2024-11-28 00:11:47.306471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:17:32.709 [2024-11-28 00:11:47.306478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.709 [2024-11-28 00:11:47.306586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.709 [2024-11-28 00:11:47.306595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.709 [2024-11-28 00:11:47.306603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:32.709 [2024-11-28 00:11:47.306615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.709 [2024-11-28 00:11:47.309252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.709 [2024-11-28 00:11:47.309281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:32.709 [2024-11-28 00:11:47.309290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:17:32.709 [2024-11-28 00:11:47.309296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.969 [2024-11-28 00:11:47.311562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.969 [2024-11-28 00:11:47.311589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:32.969 [2024-11-28 00:11:47.311598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:17:32.969 [2024-11-28 00:11:47.311604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:32.969 [2024-11-28 00:11:47.313185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.969 [2024-11-28 00:11:47.313311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.969 [2024-11-28 00:11:47.313326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:17:32.969 [2024-11-28 00:11:47.313332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.969 [2024-11-28 00:11:47.315052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.969 [2024-11-28 00:11:47.315083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.969 [2024-11-28 00:11:47.315091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:17:32.969 [2024-11-28 00:11:47.315097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.969 [2024-11-28 00:11:47.315122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.969 [2024-11-28 00:11:47.315136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.969 [2024-11-28 00:11:47.315639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315661] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 
00:11:47.315841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.970 [2024-11-28 00:11:47.315907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.970 [2024-11-28 00:11:47.315919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:17:32.970 [2024-11-28 00:11:47.315929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.970 [2024-11-28 00:11:47.315937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.970 [2024-11-28 00:11:47.315944] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.970 [2024-11-28 00:11:47.315957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.970 [2024-11-28 00:11:47.315964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.970 [2024-11-28 00:11:47.315974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.970 [2024-11-28 00:11:47.315984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.970 [2024-11-28 00:11:47.315990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.970 [2024-11-28 00:11:47.315996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.970 [2024-11-28 00:11:47.316002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.970 [2024-11-28 00:11:47.316009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.970 [2024-11-28 00:11:47.316018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:17:32.970 [2024-11-28 00:11:47.316028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.317374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.970 [2024-11-28 00:11:47.317398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.970 [2024-11-28 00:11:47.317406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:17:32.970 [2024-11-28 00:11:47.317414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.317466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.970 [2024-11-28 00:11:47.317473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.970 [2024-11-28 
00:11:47.317483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:32.970 [2024-11-28 00:11:47.317490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.322304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.322335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.970 [2024-11-28 00:11:47.322344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.322351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.322408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.322416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.970 [2024-11-28 00:11:47.322431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.322438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.322494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.322503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.970 [2024-11-28 00:11:47.322514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.322521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.322536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.322543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.970 [2024-11-28 00:11:47.322553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.322565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.330488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.330520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.970 [2024-11-28 00:11:47.330529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.330536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.970 [2024-11-28 00:11:47.334068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.970 [2024-11-28 00:11:47.334121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334173] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.970 [2024-11-28 00:11:47.334180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.970 [2024-11-28 00:11:47.334263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.970 [2024-11-28 00:11:47.334310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.970 [2024-11-28 00:11:47.334387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.970 [2024-11-28 00:11:47.334395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.970 [2024-11-28 00:11:47.334431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.970 [2024-11-28 00:11:47.334439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.971 [2024-11-28 00:11:47.334451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.971 [2024-11-28 00:11:47.334458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.971 [2024-11-28 00:11:47.334568] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.009 ms, result 0 00:17:32.971 00:17:32.971 00:17:32.971 00:11:47 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:35.537 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:17:35.537 00:11:49 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:17:35.537 [2024-11-28 00:11:49.708929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:35.537 [2024-11-28 00:11:49.709038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84852 ] 00:17:35.537 [2024-11-28 00:11:49.857075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.537 [2024-11-28 00:11:49.887338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.537 [2024-11-28 00:11:49.970425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.537 [2024-11-28 00:11:49.970493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.537 [2024-11-28 00:11:50.118567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.118607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.537 [2024-11-28 00:11:50.118619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.537 [2024-11-28 00:11:50.118627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.118676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.118686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.537 [2024-11-28 00:11:50.118697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:35.537 [2024-11-28 00:11:50.118709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.118733] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.537 [2024-11-28 00:11:50.118964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.537 [2024-11-28 00:11:50.118979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.118990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.537 [2024-11-28 00:11:50.118998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:35.537 [2024-11-28 00:11:50.119005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.120334] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:35.537 [2024-11-28 00:11:50.123045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.123169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:35.537 [2024-11-28 00:11:50.123191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:17:35.537 [2024-11-28 00:11:50.123199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.123244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.123254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:35.537 [2024-11-28 00:11:50.123262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:35.537 [2024-11-28 00:11:50.123268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.127960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 
00:11:50.127991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.537 [2024-11-28 00:11:50.128001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:17:35.537 [2024-11-28 00:11:50.128007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.128075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.128086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.537 [2024-11-28 00:11:50.128094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:35.537 [2024-11-28 00:11:50.128101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.128139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.128147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.537 [2024-11-28 00:11:50.128157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:35.537 [2024-11-28 00:11:50.128164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.128187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.537 [2024-11-28 00:11:50.129495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.129597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.537 [2024-11-28 00:11:50.129610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:17:35.537 [2024-11-28 00:11:50.129616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.129647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.129655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.537 [2024-11-28 00:11:50.129665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:35.537 [2024-11-28 00:11:50.129675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.129693] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:35.537 [2024-11-28 00:11:50.129711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:35.537 [2024-11-28 00:11:50.129745] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:35.537 [2024-11-28 00:11:50.129762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:35.537 [2024-11-28 00:11:50.129838] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:35.537 [2024-11-28 00:11:50.129849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.537 [2024-11-28 00:11:50.129864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:35.537 [2024-11-28 00:11:50.129873] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.537 [2024-11-28 00:11:50.129883] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.537 [2024-11-28 00:11:50.129891] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:35.537 [2024-11-28 00:11:50.129898] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.537 [2024-11-28 00:11:50.129905] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:35.537 [2024-11-28 00:11:50.129915] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:35.537 [2024-11-28 00:11:50.129926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.129933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.537 [2024-11-28 00:11:50.129941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:35.537 [2024-11-28 00:11:50.129949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.130007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.537 [2024-11-28 00:11:50.130014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.537 [2024-11-28 00:11:50.130021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:35.537 [2024-11-28 00:11:50.130028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.537 [2024-11-28 00:11:50.130098] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.537 [2024-11-28 00:11:50.130106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.537 [2024-11-28 00:11:50.130114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.537 [2024-11-28 00:11:50.130139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:35.537 [2024-11-28 00:11:50.130160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.537 [2024-11-28 00:11:50.130173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.537 [2024-11-28 00:11:50.130181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:35.537 [2024-11-28 00:11:50.130188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.537 [2024-11-28 00:11:50.130201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.537 [2024-11-28 00:11:50.130208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:35.537 [2024-11-28 00:11:50.130216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.537 [2024-11-28 00:11:50.130230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:35.537 [2024-11-28 00:11:50.130238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:35.537 [2024-11-28 00:11:50.130254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:35.537 [2024-11-28 00:11:50.130261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.537 [2024-11-28 00:11:50.130276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.537 [2024-11-28 00:11:50.130298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.537 [2024-11-28 00:11:50.130319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.537 [2024-11-28 00:11:50.130341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.537 [2024-11-28 00:11:50.130384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.537 [2024-11-28 00:11:50.130400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.537 [2024-11-28 00:11:50.130407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:35.537 [2024-11-28 00:11:50.130414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.537 [2024-11-28 00:11:50.130421] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.537 [2024-11-28 00:11:50.130429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.537 [2024-11-28 00:11:50.130437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.537 [2024-11-28 00:11:50.130453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:35.537 [2024-11-28 00:11:50.130460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.537 [2024-11-28 00:11:50.130468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.537 [2024-11-28 00:11:50.130475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.537 [2024-11-28 00:11:50.130482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.537 [2024-11-28 00:11:50.130490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.537 [2024-11-28 00:11:50.130498] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.537 [2024-11-28 00:11:50.130513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.537 [2024-11-28 00:11:50.130522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:35.538 [2024-11-28 00:11:50.130530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:35.538 [2024-11-28 00:11:50.130538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:35.538 [2024-11-28 00:11:50.130546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:35.538 [2024-11-28 00:11:50.130554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:35.538 [2024-11-28 00:11:50.130562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:35.538 [2024-11-28 00:11:50.130570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:35.538 [2024-11-28 00:11:50.130578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:35.538 [2024-11-28 00:11:50.130586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:35.538 [2024-11-28 00:11:50.130594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:35.538 [2024-11-28 00:11:50.130602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:35.538 [2024-11-28 00:11:50.130610] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:35.538 [2024-11-28 00:11:50.130618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:35.538 [2024-11-28 00:11:50.130625] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.538 [2024-11-28 00:11:50.130636] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.538 [2024-11-28 00:11:50.130645] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.538 [2024-11-28 00:11:50.130652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:35.538 [2024-11-28 00:11:50.130659] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.538 [2024-11-28 00:11:50.130665] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:35.538 [2024-11-28 00:11:50.130673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.538 [2024-11-28 00:11:50.130680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.538 [2024-11-28 00:11:50.130687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:17:35.538 [2024-11-28 00:11:50.130700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.136505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.136614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.800 [2024-11-28 00:11:50.136628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.771 ms 00:17:35.800 [2024-11-28 00:11:50.136635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.136715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.136722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:35.800 [2024-11-28 00:11:50.136730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:35.800 [2024-11-28 00:11:50.136742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.156129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.156342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.800 [2024-11-28 00:11:50.156397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.345 ms 00:17:35.800 [2024-11-28 00:11:50.156412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.156475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.156491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.800 [2024-11-28 00:11:50.156510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.800 [2024-11-28 00:11:50.156526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.156949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.156987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.800 [2024-11-28 00:11:50.157012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:17:35.800 [2024-11-28 00:11:50.157026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.157237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.157255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.800 [2024-11-28 00:11:50.157268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:35.800 [2024-11-28 00:11:50.157280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.163525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.163558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.800 [2024-11-28 00:11:50.163567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:17:35.800 [2024-11-28 
00:11:50.163574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.166265] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:35.800 [2024-11-28 00:11:50.166300] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:35.800 [2024-11-28 00:11:50.166309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.166316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:35.800 [2024-11-28 00:11:50.166324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:17:35.800 [2024-11-28 00:11:50.166331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.180765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.180802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:35.800 [2024-11-28 00:11:50.180812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.377 ms 00:17:35.800 [2024-11-28 00:11:50.180819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.182823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.182851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:35.800 [2024-11-28 00:11:50.182860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:17:35.800 [2024-11-28 00:11:50.182866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.184565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.184590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:35.800 [2024-11-28 00:11:50.184598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:17:35.800 [2024-11-28 00:11:50.184608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.184786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.184796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:35.800 [2024-11-28 00:11:50.184804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:35.800 [2024-11-28 00:11:50.184813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.202079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.202117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:35.800 [2024-11-28 00:11:50.202127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.247 ms 00:17:35.800 [2024-11-28 00:11:50.202135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.209429] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:35.800 [2024-11-28 00:11:50.211498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.211525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.800 [2024-11-28 00:11:50.211535] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.322 ms 00:17:35.800 [2024-11-28 00:11:50.211543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.211596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.211612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:35.800 [2024-11-28 00:11:50.211621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.800 [2024-11-28 00:11:50.211629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.211676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.211695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.800 [2024-11-28 00:11:50.211704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:35.800 [2024-11-28 00:11:50.211712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.800 [2024-11-28 00:11:50.212856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.800 [2024-11-28 00:11:50.212881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:35.800 [2024-11-28 00:11:50.212889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:17:35.801 [2024-11-28 00:11:50.212896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.801 [2024-11-28 00:11:50.212925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.801 [2024-11-28 00:11:50.212933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.801 [2024-11-28 00:11:50.212943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.801 [2024-11-28 00:11:50.212950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.801 [2024-11-28 00:11:50.212991] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:35.801 [2024-11-28 00:11:50.213000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.801 [2024-11-28 00:11:50.213007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:35.801 [2024-11-28 00:11:50.213014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:35.801 [2024-11-28 00:11:50.213023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.801 [2024-11-28 00:11:50.217087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.801 [2024-11-28 00:11:50.217140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.801 [2024-11-28 00:11:50.217152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.046 ms 00:17:35.801 [2024-11-28 00:11:50.217165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.801 [2024-11-28 00:11:50.217228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.801 [2024-11-28 00:11:50.217237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:35.801 [2024-11-28 00:11:50.217248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:35.801 [2024-11-28 00:11:50.217258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.801 [2024-11-28 00:11:50.218388] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 99.433 ms, result 0 00:17:36.740  [2024-11-28T00:11:52.290Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-28T00:11:53.661Z] Copying: 28/1024 [MB] (14 MBps) [2024-11-28T00:11:54.595Z] Copying: 44/1024 [MB] (15 MBps) [2024-11-28T00:11:55.529Z] Copying: 69/1024 [MB] (25 MBps) [2024-11-28T00:11:56.463Z] Copying: 97/1024 [MB] (27 MBps) [2024-11-28T00:11:57.397Z] Copying: 111/1024 [MB] (14 MBps) [2024-11-28T00:11:58.330Z] Copying: 138/1024 [MB] (27 MBps) [2024-11-28T00:11:59.264Z] Copying: 169/1024 [MB] (30 MBps) [2024-11-28T00:12:00.638Z] Copying: 189/1024 [MB] (20 MBps) [2024-11-28T00:12:01.571Z] Copying: 218/1024 [MB] (28 MBps) [2024-11-28T00:12:02.505Z] Copying: 253/1024 [MB] (35 MBps) [2024-11-28T00:12:03.439Z] Copying: 289/1024 [MB] (35 MBps) [2024-11-28T00:12:04.372Z] Copying: 319/1024 [MB] (30 MBps) [2024-11-28T00:12:05.308Z] Copying: 343/1024 [MB] (23 MBps) [2024-11-28T00:12:06.240Z] Copying: 375/1024 [MB] (31 MBps) [2024-11-28T00:12:07.611Z] Copying: 408/1024 [MB] (33 MBps) [2024-11-28T00:12:08.545Z] Copying: 439/1024 [MB] (30 MBps) [2024-11-28T00:12:09.478Z] Copying: 475/1024 [MB] (35 MBps) [2024-11-28T00:12:10.410Z] Copying: 506/1024 [MB] (31 MBps) [2024-11-28T00:12:11.344Z] Copying: 538/1024 [MB] (32 MBps) [2024-11-28T00:12:12.275Z] Copying: 557/1024 [MB] (19 MBps) [2024-11-28T00:12:13.647Z] Copying: 593/1024 [MB] (35 MBps) [2024-11-28T00:12:14.579Z] Copying: 622/1024 [MB] (28 MBps) [2024-11-28T00:12:15.511Z] Copying: 647/1024 [MB] (25 MBps) [2024-11-28T00:12:16.444Z] Copying: 683/1024 [MB] (35 MBps) [2024-11-28T00:12:17.488Z] Copying: 701/1024 [MB] (18 MBps) [2024-11-28T00:12:18.436Z] Copying: 722/1024 [MB] (21 MBps) [2024-11-28T00:12:19.369Z] Copying: 769/1024 [MB] (46 MBps) [2024-11-28T00:12:20.302Z] Copying: 795/1024 [MB] (25 MBps) [2024-11-28T00:12:21.234Z] Copying: 823/1024 [MB] (28 MBps) [2024-11-28T00:12:22.605Z] Copying: 850/1024 [MB] (26 MBps) [2024-11-28T00:12:23.538Z] Copying: 881/1024 [MB] (30 MBps) [2024-11-28T00:12:24.471Z] Copying: 911/1024 [MB] (30 MBps) [2024-11-28T00:12:25.415Z] Copying: 945/1024 [MB] (33 MBps) [2024-11-28T00:12:26.353Z] Copying: 975/1024 [MB] (30 MBps) [2024-11-28T00:12:27.286Z] Copying: 996/1024 [MB] (21 MBps) [2024-11-28T00:12:28.660Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-28T00:12:28.918Z] Copying: 1048084/1048576 [kB] (9620 kBps) [2024-11-28T00:12:28.918Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-28 00:12:28.666178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.666237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:14.316 [2024-11-28 00:12:28.666252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:14.316 [2024-11-28 00:12:28.666261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.668582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.316 [2024-11-28 00:12:28.670496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.670535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:14.316 [2024-11-28 00:12:28.670544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:18:14.316 [2024-11-28 00:12:28.670551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.682437] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.682481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:14.316 [2024-11-28 00:12:28.682492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.578 ms 00:18:14.316 [2024-11-28 00:12:28.682500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.705559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.705683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:14.316 [2024-11-28 00:12:28.705701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.043 ms 00:18:14.316 [2024-11-28 00:12:28.705709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.711776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.711802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:14.316 [2024-11-28 00:12:28.711816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.041 ms 00:18:14.316 [2024-11-28 00:12:28.711823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.714023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.714053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:14.316 [2024-11-28 00:12:28.714061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:18:14.316 [2024-11-28 00:12:28.714068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.316 [2024-11-28 00:12:28.717715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.316 [2024-11-28 00:12:28.717746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:14.316 [2024-11-28 00:12:28.717755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:18:14.316 [2024-11-28 00:12:28.717762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.575 [2024-11-28 00:12:28.927014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.575 [2024-11-28 00:12:28.927052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:14.575 [2024-11-28 00:12:28.927062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 209.222 ms 00:18:14.575 [2024-11-28 00:12:28.927070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.575 [2024-11-28 00:12:28.928978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.575 [2024-11-28 00:12:28.929095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:14.575 [2024-11-28 00:12:28.929110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:18:14.575 [2024-11-28 00:12:28.929117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.575 [2024-11-28 00:12:28.930504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.575 [2024-11-28 00:12:28.930533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:14.575 [2024-11-28 00:12:28.930541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:18:14.575 [2024-11-28 00:12:28.930548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:14.575 [2024-11-28 00:12:28.931400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.575 [2024-11-28 00:12:28.931431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:14.575 [2024-11-28 00:12:28.931439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:18:14.575 [2024-11-28 00:12:28.931446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.575 [2024-11-28 00:12:28.932543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.575 [2024-11-28 00:12:28.932583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:14.575 [2024-11-28 00:12:28.932593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:18:14.575 [2024-11-28 00:12:28.932600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.575 [2024-11-28 00:12:28.932633] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:14.575 [2024-11-28 00:12:28.932646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100096 / 261120 wr_cnt: 1 state: open 00:18:14.575 [2024-11-28 00:12:28.932655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 
/ 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:14.575 [2024-11-28 00:12:28.932913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.932998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933138] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 
00:12:28.933322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:14.576 [2024-11-28 00:12:28.933399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:14.576 [2024-11-28 00:12:28.933407] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:18:14.576 [2024-11-28 00:12:28.933414] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100096 00:18:14.576 [2024-11-28 00:12:28.933422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101056 00:18:14.576 [2024-11-28 00:12:28.933428] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100096 00:18:14.576 [2024-11-28 00:12:28.933436] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:18:14.576 [2024-11-28 00:12:28.933447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:14.576 [2024-11-28 00:12:28.933455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:14.576 [2024-11-28 00:12:28.933462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:14.576 [2024-11-28 00:12:28.933468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:14.576 [2024-11-28 00:12:28.933474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:14.576 [2024-11-28 00:12:28.933490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.577 [2024-11-28 00:12:28.933498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:14.577 [2024-11-28 00:12:28.933509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:18:14.577 [2024-11-28 00:12:28.933516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.934844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.577 [2024-11-28 00:12:28.934864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:14.577 [2024-11-28 00:12:28.934877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:18:14.577 [2024-11-28 00:12:28.934884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.934941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.577 [2024-11-28 00:12:28.934949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:14.577 [2024-11-28 
00:12:28.934956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:14.577 [2024-11-28 00:12:28.934963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.939909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.940014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.577 [2024-11-28 00:12:28.940063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.940084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.940142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.940161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.577 [2024-11-28 00:12:28.940186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.940204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.940263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.940390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.577 [2024-11-28 00:12:28.940411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.940429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.940454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.940477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.577 [2024-11-28 00:12:28.940496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.940558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.948540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.948667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.577 [2024-11-28 00:12:28.948712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.948733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.577 [2024-11-28 00:12:28.952332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.952353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.577 [2024-11-28 00:12:28.952486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.952507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952607] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.577 [2024-11-28 00:12:28.952627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.952645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.577 [2024-11-28 00:12:28.952788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.952809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:14.577 [2024-11-28 00:12:28.952889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.952906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.952957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.952984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.577 [2024-11-28 00:12:28.953003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.953020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.953126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.577 [2024-11-28 00:12:28.953171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.577 [2024-11-28 00:12:28.953190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.577 [2024-11-28 00:12:28.953214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.577 [2024-11-28 00:12:28.953380] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 287.794 ms, result 0 00:18:15.140 00:18:15.140 00:18:15.140 00:12:29 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:18:15.140 [2024-11-28 00:12:29.626010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:15.140 [2024-11-28 00:12:29.626117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85274 ] 00:18:15.396 [2024-11-28 00:12:29.773700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.396 [2024-11-28 00:12:29.804018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.396 [2024-11-28 00:12:29.888165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.396 [2024-11-28 00:12:29.888224] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.654 [2024-11-28 00:12:30.036976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.654 [2024-11-28 00:12:30.037022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:15.654 [2024-11-28 00:12:30.037037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:15.654 [2024-11-28 00:12:30.037045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.654 [2024-11-28 00:12:30.037089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.654 [2024-11-28 00:12:30.037099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.654 [2024-11-28 00:12:30.037108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:15.654 [2024-11-28 00:12:30.037117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.654 [2024-11-28 00:12:30.037144] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:15.654 [2024-11-28 00:12:30.037405] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:15.655 [2024-11-28 00:12:30.037420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.037430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.655 [2024-11-28 00:12:30.037438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:15.655 [2024-11-28 00:12:30.037444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.038478] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:15.655 [2024-11-28 00:12:30.040603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.040634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:15.655 [2024-11-28 00:12:30.040648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:18:15.655 [2024-11-28 00:12:30.040655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.040703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.040717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:15.655 [2024-11-28 00:12:30.040725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:15.655 [2024-11-28 00:12:30.040731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.045650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 
00:12:30.045776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.655 [2024-11-28 00:12:30.045791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:18:15.655 [2024-11-28 00:12:30.045798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.045877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.045886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.655 [2024-11-28 00:12:30.045895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:15.655 [2024-11-28 00:12:30.045903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.045949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.045957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.655 [2024-11-28 00:12:30.045967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:15.655 [2024-11-28 00:12:30.045974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.045997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:15.655 [2024-11-28 00:12:30.047287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.047307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.655 [2024-11-28 00:12:30.047319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:18:15.655 [2024-11-28 00:12:30.047326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.047355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.047373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.655 [2024-11-28 00:12:30.047383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:15.655 [2024-11-28 00:12:30.047390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.047409] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:15.655 [2024-11-28 00:12:30.047426] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:15.655 [2024-11-28 00:12:30.047456] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:15.655 [2024-11-28 00:12:30.047471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:15.655 [2024-11-28 00:12:30.047543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:15.655 [2024-11-28 00:12:30.047554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.655 [2024-11-28 00:12:30.047567] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:15.655 [2024-11-28 00:12:30.047578] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047586] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047593] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:15.655 [2024-11-28 00:12:30.047600] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.655 [2024-11-28 00:12:30.047607] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:15.655 [2024-11-28 00:12:30.047617] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:15.655 [2024-11-28 00:12:30.047624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.047631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.655 [2024-11-28 00:12:30.047638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:15.655 [2024-11-28 00:12:30.047646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.047705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.655 [2024-11-28 00:12:30.047717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.655 [2024-11-28 00:12:30.047724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:15.655 [2024-11-28 00:12:30.047731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.655 [2024-11-28 00:12:30.047801] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.655 [2024-11-28 00:12:30.047812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.655 [2024-11-28 00:12:30.047820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:15.655 [2024-11-28 00:12:30.047844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.655 [2024-11-28 00:12:30.047866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.655 [2024-11-28 00:12:30.047878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.655 [2024-11-28 00:12:30.047885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:15.655 [2024-11-28 00:12:30.047891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.655 [2024-11-28 00:12:30.047902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.655 [2024-11-28 00:12:30.047908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:15.655 [2024-11-28 00:12:30.047914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:15.655 [2024-11-28 00:12:30.047930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:15.655 [2024-11-28 00:12:30.047936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:15.655 [2024-11-28 00:12:30.047949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:15.655 [2024-11-28 00:12:30.047956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.655 [2024-11-28 00:12:30.047969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:15.655 [2024-11-28 00:12:30.047981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.655 [2024-11-28 00:12:30.047988] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:15.655 [2024-11-28 00:12:30.047994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:15.655 [2024-11-28 00:12:30.048000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.655 [2024-11-28 00:12:30.048007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:15.655 [2024-11-28 00:12:30.048013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:15.655 [2024-11-28 00:12:30.048019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:15.655 [2024-11-28 00:12:30.048025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:15.655 [2024-11-28 00:12:30.048033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:15.655 [2024-11-28 00:12:30.048040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.655 [2024-11-28 00:12:30.048046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:15.655 [2024-11-28 00:12:30.048052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.655 [2024-11-28 00:12:30.048059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.655 [2024-11-28 00:12:30.048066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:15.655 [2024-11-28 00:12:30.048072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.655 [2024-11-28 00:12:30.048078] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.655 [2024-11-28 00:12:30.048085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.655 [2024-11-28 00:12:30.048094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.655 [2024-11-28 00:12:30.048101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.655 [2024-11-28 00:12:30.048108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:15.655 [2024-11-28 00:12:30.048115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.655 [2024-11-28 00:12:30.048121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.655 [2024-11-28 00:12:30.048128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.655 [2024-11-28 00:12:30.048134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.655 [2024-11-28 00:12:30.048142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.656 [2024-11-28 00:12:30.048149] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.656 [2024-11-28 00:12:30.048161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.656 [2024-11-28 00:12:30.048169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:15.656 [2024-11-28 00:12:30.048176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:15.656 [2024-11-28 00:12:30.048183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:15.656 [2024-11-28 00:12:30.048190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:15.656 [2024-11-28 00:12:30.048196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:15.656 [2024-11-28 00:12:30.048203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:15.656 [2024-11-28 00:12:30.048209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:15.656 [2024-11-28 00:12:30.048216] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:15.656 [2024-11-28 00:12:30.048223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:15.656 [2024-11-28 00:12:30.048230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:15.656 [2024-11-28 00:12:30.048236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:15.656 [2024-11-28 00:12:30.048244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:15.656 [2024-11-28 00:12:30.048251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:15.656 [2024-11-28 00:12:30.048260] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.656 [2024-11-28 00:12:30.048271] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.656 [2024-11-28 00:12:30.048282] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.656 [2024-11-28 00:12:30.048289] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.656 [2024-11-28 00:12:30.048296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.656 [2024-11-28 00:12:30.048303] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:15.656 [2024-11-28 00:12:30.048311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.048318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.656 [2024-11-28 00:12:30.048325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:15.656 [2024-11-28 00:12:30.048334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.054359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.054482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.656 [2024-11-28 00:12:30.054532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:18:15.656 [2024-11-28 00:12:30.054555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.054650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.054671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:15.656 [2024-11-28 00:12:30.054689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:15.656 [2024-11-28 00:12:30.054708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.070948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.071114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.656 [2024-11-28 00:12:30.071189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:18:15.656 [2024-11-28 00:12:30.071220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.071298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.071349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.656 [2024-11-28 00:12:30.071403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.656 [2024-11-28 00:12:30.071438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.071834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.071893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.656 [2024-11-28 00:12:30.071923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:15.656 [2024-11-28 00:12:30.072025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.072204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.072244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.656 [2024-11-28 00:12:30.072323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:18:15.656 [2024-11-28 00:12:30.072353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.078320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.078435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.656 [2024-11-28 00:12:30.078484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.904 ms 00:18:15.656 [2024-11-28 
00:12:30.078506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.081206] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:18:15.656 [2024-11-28 00:12:30.081325] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:15.656 [2024-11-28 00:12:30.081400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.081420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:15.656 [2024-11-28 00:12:30.081439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:18:15.656 [2024-11-28 00:12:30.081456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.098509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.098624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:15.656 [2024-11-28 00:12:30.098681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.009 ms 00:18:15.656 [2024-11-28 00:12:30.098702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.101231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.101350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:15.656 [2024-11-28 00:12:30.101421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:18:15.656 [2024-11-28 00:12:30.101445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.103464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.103583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:15.656 [2024-11-28 00:12:30.103641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:18:15.656 [2024-11-28 00:12:30.103664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.104131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.104266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:15.656 [2024-11-28 00:12:30.104634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:18:15.656 [2024-11-28 00:12:30.104662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.121739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.121871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:15.656 [2024-11-28 00:12:30.121887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.048 ms 00:18:15.656 [2024-11-28 00:12:30.121895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.129094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:15.656 [2024-11-28 00:12:30.131305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.131333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:15.656 [2024-11-28 00:12:30.131343] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.378 ms 00:18:15.656 [2024-11-28 00:12:30.131359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.131427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.131438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:15.656 [2024-11-28 00:12:30.131448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:15.656 [2024-11-28 00:12:30.131456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.132452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.132482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:15.656 [2024-11-28 00:12:30.132492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:18:15.656 [2024-11-28 00:12:30.132498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.133662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.133687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:15.656 [2024-11-28 00:12:30.133695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:18:15.656 [2024-11-28 00:12:30.133702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.133741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.133749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:15.656 [2024-11-28 00:12:30.133763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.656 [2024-11-28 00:12:30.133770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.656 [2024-11-28 00:12:30.133802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:15.656 [2024-11-28 00:12:30.133814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.656 [2024-11-28 00:12:30.133821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:15.656 [2024-11-28 00:12:30.133829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:15.657 [2024-11-28 00:12:30.133835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.657 [2024-11-28 00:12:30.137616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.657 [2024-11-28 00:12:30.137647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:15.657 [2024-11-28 00:12:30.137660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.764 ms 00:18:15.657 [2024-11-28 00:12:30.137673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.657 [2024-11-28 00:12:30.137735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.657 [2024-11-28 00:12:30.137743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:15.657 [2024-11-28 00:12:30.137757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:15.657 [2024-11-28 00:12:30.137764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.657 [2024-11-28 00:12:30.143385] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 104.356 ms, result 0 00:18:17.027  [2024-11-28T00:12:32.562Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-28T00:12:33.495Z] Copying: 37/1024 [MB] (23 MBps) [2024-11-28T00:12:34.427Z] Copying: 56/1024 [MB] (18 MBps) [2024-11-28T00:12:35.358Z] Copying: 76/1024 [MB] (20 MBps) [2024-11-28T00:12:36.729Z] Copying: 95/1024 [MB] (18 MBps) [2024-11-28T00:12:37.662Z] Copying: 121/1024 [MB] (25 MBps) [2024-11-28T00:12:38.595Z] Copying: 141/1024 [MB] (20 MBps) [2024-11-28T00:12:39.529Z] Copying: 171/1024 [MB] (29 MBps) [2024-11-28T00:12:40.463Z] Copying: 201/1024 [MB] (29 MBps) [2024-11-28T00:12:41.395Z] Copying: 217/1024 [MB] (15 MBps) [2024-11-28T00:12:42.403Z] Copying: 232/1024 [MB] (14 MBps) [2024-11-28T00:12:43.344Z] Copying: 247/1024 [MB] (15 MBps) [2024-11-28T00:12:44.718Z] Copying: 267/1024 [MB] (19 MBps) [2024-11-28T00:12:45.651Z] Copying: 311/1024 [MB] (44 MBps) [2024-11-28T00:12:46.585Z] Copying: 357/1024 [MB] (45 MBps) [2024-11-28T00:12:47.519Z] Copying: 405/1024 [MB] (47 MBps) [2024-11-28T00:12:48.452Z] Copying: 452/1024 [MB] (47 MBps) [2024-11-28T00:12:49.386Z] Copying: 501/1024 [MB] (49 MBps) [2024-11-28T00:12:50.322Z] Copying: 548/1024 [MB] (47 MBps) [2024-11-28T00:12:51.694Z] Copying: 597/1024 [MB] (48 MBps) [2024-11-28T00:12:52.630Z] Copying: 643/1024 [MB] (46 MBps) [2024-11-28T00:12:53.564Z] Copying: 689/1024 [MB] (46 MBps) [2024-11-28T00:12:54.498Z] Copying: 726/1024 [MB] (36 MBps) [2024-11-28T00:12:55.433Z] Copying: 769/1024 [MB] (43 MBps) [2024-11-28T00:12:56.368Z] Copying: 815/1024 [MB] (45 MBps) [2024-11-28T00:12:57.743Z] Copying: 860/1024 [MB] (45 MBps) [2024-11-28T00:12:58.676Z] Copying: 907/1024 [MB] (46 MBps) [2024-11-28T00:12:59.611Z] Copying: 956/1024 [MB] (49 MBps) [2024-11-28T00:12:59.869Z] Copying: 1004/1024 [MB] (47 MBps) [2024-11-28T00:13:00.435Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-28 00:13:00.222529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.833 [2024-11-28 00:13:00.222813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.833 [2024-11-28 00:13:00.223142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:45.833 [2024-11-28 00:13:00.223195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.833 [2024-11-28 00:13:00.223260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.834 [2024-11-28 00:13:00.223811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.223929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.834 [2024-11-28 00:13:00.223988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:18:45.834 [2024-11-28 00:13:00.224009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.224244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.224270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.834 [2024-11-28 00:13:00.224289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:18:45.834 [2024-11-28 00:13:00.224372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.229451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.229553] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.834 [2024-11-28 00:13:00.229606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:18:45.834 [2024-11-28 00:13:00.229627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.235765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.235858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:45.834 [2024-11-28 00:13:00.235922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.098 ms 00:18:45.834 [2024-11-28 00:13:00.235932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.237056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.237082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.834 [2024-11-28 00:13:00.237090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:18:45.834 [2024-11-28 00:13:00.237097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.240253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.240350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.834 [2024-11-28 00:13:00.240412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:18:45.834 [2024-11-28 00:13:00.240435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.294236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.294356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.834 [2024-11-28 00:13:00.294424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.681 ms 00:18:45.834 [2024-11-28 00:13:00.294446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.296022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.296111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:45.834 [2024-11-28 00:13:00.296187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:18:45.834 [2024-11-28 00:13:00.296208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.297341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.297451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:45.834 [2024-11-28 00:13:00.297499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:18:45.834 [2024-11-28 00:13:00.297519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.298381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.834 [2024-11-28 00:13:00.298477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.834 [2024-11-28 00:13:00.298524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:18:45.834 [2024-11-28 00:13:00.298543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.299306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:45.834 [2024-11-28 00:13:00.299402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.834 [2024-11-28 00:13:00.299452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:18:45.834 [2024-11-28 00:13:00.299472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.834 [2024-11-28 00:13:00.299517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.834 [2024-11-28 00:13:00.299608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133120 / 261120 wr_cnt: 1 state: open 00:18:45.834 [2024-11-28 00:13:00.299641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.299997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.300994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301375] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.834 [2024-11-28 00:13:00.301546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 
00:13:00.301948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.301998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:18:45.835 [2024-11-28 00:13:00.302129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.835 [2024-11-28 00:13:00.302165] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.835 [2024-11-28 00:13:00.302178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7af7b6fd-19b0-40f4-9107-a95a2a5ab5cc 00:18:45.835 [2024-11-28 00:13:00.302186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133120 00:18:45.835 [2024-11-28 00:13:00.302196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 33984 00:18:45.835 [2024-11-28 00:13:00.302202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 33024 00:18:45.835 [2024-11-28 00:13:00.302213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0291 00:18:45.835 [2024-11-28 00:13:00.302226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.835 [2024-11-28 00:13:00.302233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.835 [2024-11-28 00:13:00.302240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.835 [2024-11-28 00:13:00.302246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.835 [2024-11-28 00:13:00.302252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.835 [2024-11-28 00:13:00.302259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.835 [2024-11-28 00:13:00.302267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.835 [2024-11-28 00:13:00.302274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:18:45.835 [2024-11-28 00:13:00.302281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.303735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.835 [2024-11-28 00:13:00.303819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.835 [2024-11-28 00:13:00.303871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:18:45.835 [2024-11-28 00:13:00.303896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.303992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.835 [2024-11-28 00:13:00.304024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.835 [2024-11-28 00:13:00.304078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:45.835 [2024-11-28 00:13:00.304098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.308904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.308939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.835 [2024-11-28 00:13:00.308953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:45.835 [2024-11-28 00:13:00.308960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.309005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.309013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.835 [2024-11-28 00:13:00.309020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.309027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.309084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.309097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.835 [2024-11-28 00:13:00.309107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.309114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.309128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.309136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.835 [2024-11-28 00:13:00.309153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.309160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.317029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.317067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.835 [2024-11-28 00:13:00.317077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.317084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.320533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.320562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.835 [2024-11-28 00:13:00.320571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.320578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.835 [2024-11-28 00:13:00.320608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.835 [2024-11-28 00:13:00.320616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.835 [2024-11-28 00:13:00.320624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.835 [2024-11-28 00:13:00.320636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.320674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.836 [2024-11-28 00:13:00.320682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.836 [2024-11-28 00:13:00.320689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.836 [2024-11-28 00:13:00.320696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.320751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.836 [2024-11-28 00:13:00.320760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.836 [2024-11-28 
00:13:00.320767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.836 [2024-11-28 00:13:00.320775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.320808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.836 [2024-11-28 00:13:00.320816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.836 [2024-11-28 00:13:00.320824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.836 [2024-11-28 00:13:00.320831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.320862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.836 [2024-11-28 00:13:00.320870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.836 [2024-11-28 00:13:00.320878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.836 [2024-11-28 00:13:00.320884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.320926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.836 [2024-11-28 00:13:00.320934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.836 [2024-11-28 00:13:00.320942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.836 [2024-11-28 00:13:00.320948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.836 [2024-11-28 00:13:00.321058] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 98.515 ms, result 0 00:18:46.094 00:18:46.094 00:18:46.094 00:13:00 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:47.996 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:47.996 00:13:02 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:18:47.996 00:13:02 -- ftl/restore.sh@85 -- # restore_kill 00:18:47.997 00:13:02 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:48.255 00:13:02 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:48.255 00:13:02 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.255 Process with pid 83773 is not found 00:18:48.255 Remove shared memory files 00:18:48.255 00:13:02 -- ftl/restore.sh@32 -- # killprocess 83773 00:18:48.255 00:13:02 -- common/autotest_common.sh@936 -- # '[' -z 83773 ']' 00:18:48.255 00:13:02 -- common/autotest_common.sh@940 -- # kill -0 83773 00:18:48.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83773) - No such process 00:18:48.255 00:13:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83773 is not found' 00:18:48.255 00:13:02 -- ftl/restore.sh@33 -- # remove_shm 00:18:48.255 00:13:02 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:48.255 00:13:02 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:48.256 00:13:02 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:48.256 00:13:02 -- ftl/common.sh@207 -- # rm -f rm -f 00:18:48.256 00:13:02 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:48.256 00:13:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:48.256 ************************************ 00:18:48.256 END TEST ftl_restore 00:18:48.256 ************************************ 00:18:48.256 00:18:48.256 real 2m50.419s 00:18:48.256 user 
2m39.702s 00:18:48.256 sys 0m11.500s 00:18:48.256 00:13:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:48.256 00:13:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.256 00:13:02 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:48.256 00:13:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:48.256 00:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.256 00:13:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.256 ************************************ 00:18:48.256 START TEST ftl_dirty_shutdown 00:18:48.256 ************************************ 00:18:48.256 00:13:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:48.256 * Looking for test storage... 00:18:48.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.256 00:13:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:48.256 00:13:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:48.256 00:13:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:48.514 00:13:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:48.514 00:13:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:48.514 00:13:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:48.514 00:13:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:48.514 00:13:02 -- scripts/common.sh@335 -- # IFS=.-: 00:18:48.514 00:13:02 -- scripts/common.sh@335 -- # read -ra ver1 00:18:48.514 00:13:02 -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.514 00:13:02 -- scripts/common.sh@336 -- # read -ra ver2 00:18:48.514 00:13:02 -- scripts/common.sh@337 -- # local 'op=<' 00:18:48.514 00:13:02 -- scripts/common.sh@339 -- # ver1_l=2 00:18:48.514 00:13:02 -- scripts/common.sh@340 -- # ver2_l=1 00:18:48.514 00:13:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:48.514 00:13:02 -- scripts/common.sh@343 -- # case "$op" in 00:18:48.514 00:13:02 -- scripts/common.sh@344 -- # : 1 00:18:48.514 00:13:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:48.514 00:13:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.514 00:13:02 -- scripts/common.sh@364 -- # decimal 1 00:18:48.514 00:13:02 -- scripts/common.sh@352 -- # local d=1 00:18:48.514 00:13:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.514 00:13:02 -- scripts/common.sh@354 -- # echo 1 00:18:48.514 00:13:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:48.514 00:13:02 -- scripts/common.sh@365 -- # decimal 2 00:18:48.514 00:13:02 -- scripts/common.sh@352 -- # local d=2 00:18:48.514 00:13:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.515 00:13:02 -- scripts/common.sh@354 -- # echo 2 00:18:48.515 00:13:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:48.515 00:13:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:48.515 00:13:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:48.515 00:13:02 -- scripts/common.sh@367 -- # return 0 00:18:48.515 00:13:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.515 00:13:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.515 --rc genhtml_branch_coverage=1 00:18:48.515 --rc genhtml_function_coverage=1 00:18:48.515 --rc genhtml_legend=1 00:18:48.515 --rc geninfo_all_blocks=1 00:18:48.515 --rc geninfo_unexecuted_blocks=1 00:18:48.515 00:18:48.515 ' 00:18:48.515 00:13:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.515 --rc genhtml_branch_coverage=1 00:18:48.515 --rc genhtml_function_coverage=1 00:18:48.515 --rc genhtml_legend=1 00:18:48.515 --rc geninfo_all_blocks=1 00:18:48.515 --rc geninfo_unexecuted_blocks=1 00:18:48.515 00:18:48.515 ' 00:18:48.515 00:13:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.515 --rc genhtml_branch_coverage=1 00:18:48.515 --rc genhtml_function_coverage=1 00:18:48.515 --rc genhtml_legend=1 00:18:48.515 --rc geninfo_all_blocks=1 00:18:48.515 --rc geninfo_unexecuted_blocks=1 00:18:48.515 00:18:48.515 ' 00:18:48.515 00:13:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:48.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.515 --rc genhtml_branch_coverage=1 00:18:48.515 --rc genhtml_function_coverage=1 00:18:48.515 --rc genhtml_legend=1 00:18:48.515 --rc geninfo_all_blocks=1 00:18:48.515 --rc geninfo_unexecuted_blocks=1 00:18:48.515 00:18:48.515 ' 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:48.515 00:13:02 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:18:48.515 00:13:02 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.515 00:13:02 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.515 00:13:02 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
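The xtrace above is scripts/common.sh comparing the installed lcov version against 2 (here 1.15 < 2) to decide which coverage flags to export. A minimal standalone sketch of that field-by-field comparison, with ver_lt used purely as an illustrative name rather than the repo's actual helper:

# Sketch only: mirrors the dot/dash-separated version comparison traced above.
ver_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local max=${#ver1[@]} v
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first version is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first version is older
    done
    return 1                                              # equal: not less-than
}
ver_lt 1.15 2 && echo 'lcov older than 2: keep the --rc lcov_*_coverage flags'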
00:18:48.515 00:13:02 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:48.515 00:13:02 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.515 00:13:02 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:48.515 00:13:02 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:48.515 00:13:02 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.515 00:13:02 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.515 00:13:02 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:48.515 00:13:02 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:48.515 00:13:02 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:48.515 00:13:02 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:48.515 00:13:02 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:48.515 00:13:02 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:48.515 00:13:02 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.515 00:13:02 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.515 00:13:02 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:48.515 00:13:02 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:48.515 00:13:02 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:48.515 00:13:02 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:48.515 00:13:02 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:48.515 00:13:02 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:48.515 00:13:02 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:48.515 00:13:02 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:48.515 00:13:02 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:48.515 00:13:02 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@45 -- # svcpid=85699 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:48.515 00:13:02 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 85699 00:18:48.515 00:13:02 -- common/autotest_common.sh@829 -- # '[' -z 85699 ']' 
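At this point dirty_shutdown.sh has parsed its options (-c 0000:00:06.0 for the NV cache, 0000:00:07.0 as the base device, a 240 s RPC timeout, 4096-byte blocks, a 262144-block data size), launched the target as pid 85699, and is waiting for it to answer RPCs. A condensed sketch of that launch-and-wait step, where SPDK_REPO is shorthand introduced here and the socket poll merely approximates waitforlisten:

# Sketch of the spdk_tgt launch traced above; SPDK_REPO is not a variable the test defines.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
rpc_py=$SPDK_REPO/scripts/rpc.py

"$SPDK_REPO/build/bin/spdk_tgt" -m 0x1 &     # pin the target to core 0, as -m 0x1 requests
svcpid=$!

# waitforlisten (test/common/autotest_common.sh) blocks until the RPC socket responds;
# polling rpc_get_methods stands in for the same condition here.
until "$rpc_py" rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$svcpid" 2> /dev/null || exit 1   # give up if the target already exited
    sleep 0.5
done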
00:18:48.515 00:13:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.515 00:13:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.515 00:13:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.515 00:13:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.515 00:13:02 -- common/autotest_common.sh@10 -- # set +x 00:18:48.515 [2024-11-28 00:13:02.947740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:48.515 [2024-11-28 00:13:02.947979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85699 ] 00:18:48.515 [2024-11-28 00:13:03.085225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.515 [2024-11-28 00:13:03.114626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:48.515 [2024-11-28 00:13:03.114966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.453 00:13:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.453 00:13:03 -- common/autotest_common.sh@862 -- # return 0 00:18:49.453 00:13:03 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:49.453 00:13:03 -- ftl/common.sh@54 -- # local name=nvme0 00:18:49.453 00:13:03 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:49.453 00:13:03 -- ftl/common.sh@56 -- # local size=103424 00:18:49.453 00:13:03 -- ftl/common.sh@59 -- # local base_bdev 00:18:49.453 00:13:03 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:49.453 00:13:04 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:49.453 00:13:04 -- ftl/common.sh@62 -- # local base_size 00:18:49.453 00:13:04 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:49.453 00:13:04 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:18:49.453 00:13:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:49.453 00:13:04 -- common/autotest_common.sh@1369 -- # local bs 00:18:49.453 00:13:04 -- common/autotest_common.sh@1370 -- # local nb 00:18:49.453 00:13:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:49.712 00:13:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:49.712 { 00:18:49.712 "name": "nvme0n1", 00:18:49.712 "aliases": [ 00:18:49.712 "15874787-7c54-4ffd-8a22-f16615568f17" 00:18:49.712 ], 00:18:49.712 "product_name": "NVMe disk", 00:18:49.712 "block_size": 4096, 00:18:49.712 "num_blocks": 1310720, 00:18:49.712 "uuid": "15874787-7c54-4ffd-8a22-f16615568f17", 00:18:49.712 "assigned_rate_limits": { 00:18:49.712 "rw_ios_per_sec": 0, 00:18:49.712 "rw_mbytes_per_sec": 0, 00:18:49.712 "r_mbytes_per_sec": 0, 00:18:49.712 "w_mbytes_per_sec": 0 00:18:49.712 }, 00:18:49.712 "claimed": true, 00:18:49.712 "claim_type": "read_many_write_one", 00:18:49.712 "zoned": false, 00:18:49.712 "supported_io_types": { 00:18:49.712 "read": true, 00:18:49.712 "write": true, 00:18:49.712 "unmap": true, 00:18:49.712 "write_zeroes": true, 00:18:49.712 "flush": true, 00:18:49.712 "reset": true, 00:18:49.712 "compare": true, 
00:18:49.712 "compare_and_write": false, 00:18:49.712 "abort": true, 00:18:49.712 "nvme_admin": true, 00:18:49.712 "nvme_io": true 00:18:49.712 }, 00:18:49.712 "driver_specific": { 00:18:49.712 "nvme": [ 00:18:49.712 { 00:18:49.712 "pci_address": "0000:00:07.0", 00:18:49.712 "trid": { 00:18:49.712 "trtype": "PCIe", 00:18:49.712 "traddr": "0000:00:07.0" 00:18:49.712 }, 00:18:49.712 "ctrlr_data": { 00:18:49.712 "cntlid": 0, 00:18:49.712 "vendor_id": "0x1b36", 00:18:49.712 "model_number": "QEMU NVMe Ctrl", 00:18:49.712 "serial_number": "12341", 00:18:49.712 "firmware_revision": "8.0.0", 00:18:49.712 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:49.712 "oacs": { 00:18:49.712 "security": 0, 00:18:49.712 "format": 1, 00:18:49.712 "firmware": 0, 00:18:49.712 "ns_manage": 1 00:18:49.712 }, 00:18:49.712 "multi_ctrlr": false, 00:18:49.712 "ana_reporting": false 00:18:49.712 }, 00:18:49.712 "vs": { 00:18:49.712 "nvme_version": "1.4" 00:18:49.712 }, 00:18:49.712 "ns_data": { 00:18:49.712 "id": 1, 00:18:49.712 "can_share": false 00:18:49.712 } 00:18:49.712 } 00:18:49.712 ], 00:18:49.712 "mp_policy": "active_passive" 00:18:49.712 } 00:18:49.712 } 00:18:49.712 ]' 00:18:49.712 00:13:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:49.712 00:13:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:49.712 00:13:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:49.712 00:13:04 -- common/autotest_common.sh@1373 -- # nb=1310720 00:18:49.712 00:13:04 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:18:49.712 00:13:04 -- common/autotest_common.sh@1377 -- # echo 5120 00:18:49.712 00:13:04 -- ftl/common.sh@63 -- # base_size=5120 00:18:49.712 00:13:04 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:49.712 00:13:04 -- ftl/common.sh@67 -- # clear_lvols 00:18:49.712 00:13:04 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:49.712 00:13:04 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:49.971 00:13:04 -- ftl/common.sh@28 -- # stores=585eed1c-69a3-4099-ad86-79951c1f8520 00:18:49.971 00:13:04 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:49.971 00:13:04 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 585eed1c-69a3-4099-ad86-79951c1f8520 00:18:50.229 00:13:04 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:50.487 00:13:04 -- ftl/common.sh@68 -- # lvs=b009675b-d06f-47bb-be17-e74d747cd05c 00:18:50.487 00:13:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b009675b-d06f-47bb-be17-e74d747cd05c 00:18:50.487 00:13:05 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.487 00:13:05 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:18:50.487 00:13:05 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.487 00:13:05 -- ftl/common.sh@35 -- # local name=nvc0 00:18:50.487 00:13:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:50.487 00:13:05 -- ftl/common.sh@37 -- # local base_bdev=0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.487 00:13:05 -- ftl/common.sh@38 -- # local cache_size= 00:18:50.487 00:13:05 -- ftl/common.sh@41 -- # get_bdev_size 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.487 00:13:05 -- common/autotest_common.sh@1367 -- # local bdev_name=0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.487 00:13:05 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:18:50.487 00:13:05 -- common/autotest_common.sh@1369 -- # local bs 00:18:50.487 00:13:05 -- common/autotest_common.sh@1370 -- # local nb 00:18:50.487 00:13:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:50.744 00:13:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:50.744 { 00:18:50.744 "name": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:50.744 "aliases": [ 00:18:50.744 "lvs/nvme0n1p0" 00:18:50.744 ], 00:18:50.744 "product_name": "Logical Volume", 00:18:50.744 "block_size": 4096, 00:18:50.744 "num_blocks": 26476544, 00:18:50.744 "uuid": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:50.744 "assigned_rate_limits": { 00:18:50.744 "rw_ios_per_sec": 0, 00:18:50.744 "rw_mbytes_per_sec": 0, 00:18:50.744 "r_mbytes_per_sec": 0, 00:18:50.744 "w_mbytes_per_sec": 0 00:18:50.744 }, 00:18:50.744 "claimed": false, 00:18:50.744 "zoned": false, 00:18:50.744 "supported_io_types": { 00:18:50.744 "read": true, 00:18:50.744 "write": true, 00:18:50.744 "unmap": true, 00:18:50.744 "write_zeroes": true, 00:18:50.744 "flush": false, 00:18:50.744 "reset": true, 00:18:50.744 "compare": false, 00:18:50.744 "compare_and_write": false, 00:18:50.744 "abort": false, 00:18:50.744 "nvme_admin": false, 00:18:50.744 "nvme_io": false 00:18:50.744 }, 00:18:50.744 "driver_specific": { 00:18:50.744 "lvol": { 00:18:50.744 "lvol_store_uuid": "b009675b-d06f-47bb-be17-e74d747cd05c", 00:18:50.744 "base_bdev": "nvme0n1", 00:18:50.744 "thin_provision": true, 00:18:50.744 "snapshot": false, 00:18:50.744 "clone": false, 00:18:50.744 "esnap_clone": false 00:18:50.744 } 00:18:50.744 } 00:18:50.744 } 00:18:50.744 ]' 00:18:50.744 00:13:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:50.744 00:13:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:50.744 00:13:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:50.744 00:13:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:50.744 00:13:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:50.744 00:13:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:50.744 00:13:05 -- ftl/common.sh@41 -- # local base_size=5171 00:18:50.744 00:13:05 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:50.744 00:13:05 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:51.001 00:13:05 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:51.002 00:13:05 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:51.002 00:13:05 -- ftl/common.sh@48 -- # get_bdev_size 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.002 00:13:05 -- common/autotest_common.sh@1367 -- # local bdev_name=0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.002 00:13:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:51.002 00:13:05 -- common/autotest_common.sh@1369 -- # local bs 00:18:51.002 00:13:05 -- common/autotest_common.sh@1370 -- # local nb 00:18:51.002 00:13:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.260 00:13:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:51.260 { 00:18:51.260 "name": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:51.260 "aliases": [ 00:18:51.260 "lvs/nvme0n1p0" 00:18:51.260 ], 00:18:51.260 "product_name": "Logical Volume", 00:18:51.260 "block_size": 4096, 00:18:51.260 "num_blocks": 26476544, 
00:18:51.260 "uuid": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:51.260 "assigned_rate_limits": { 00:18:51.260 "rw_ios_per_sec": 0, 00:18:51.260 "rw_mbytes_per_sec": 0, 00:18:51.260 "r_mbytes_per_sec": 0, 00:18:51.260 "w_mbytes_per_sec": 0 00:18:51.260 }, 00:18:51.260 "claimed": false, 00:18:51.260 "zoned": false, 00:18:51.260 "supported_io_types": { 00:18:51.260 "read": true, 00:18:51.260 "write": true, 00:18:51.260 "unmap": true, 00:18:51.260 "write_zeroes": true, 00:18:51.260 "flush": false, 00:18:51.260 "reset": true, 00:18:51.260 "compare": false, 00:18:51.260 "compare_and_write": false, 00:18:51.260 "abort": false, 00:18:51.260 "nvme_admin": false, 00:18:51.260 "nvme_io": false 00:18:51.260 }, 00:18:51.260 "driver_specific": { 00:18:51.260 "lvol": { 00:18:51.260 "lvol_store_uuid": "b009675b-d06f-47bb-be17-e74d747cd05c", 00:18:51.260 "base_bdev": "nvme0n1", 00:18:51.260 "thin_provision": true, 00:18:51.260 "snapshot": false, 00:18:51.260 "clone": false, 00:18:51.260 "esnap_clone": false 00:18:51.260 } 00:18:51.260 } 00:18:51.260 } 00:18:51.260 ]' 00:18:51.260 00:13:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:51.260 00:13:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:51.260 00:13:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:51.260 00:13:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:51.260 00:13:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:51.260 00:13:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:51.260 00:13:05 -- ftl/common.sh@48 -- # cache_size=5171 00:18:51.260 00:13:05 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:51.519 00:13:05 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:18:51.519 00:13:05 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.519 00:13:05 -- common/autotest_common.sh@1367 -- # local bdev_name=0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.519 00:13:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:51.519 00:13:05 -- common/autotest_common.sh@1369 -- # local bs 00:18:51.519 00:13:05 -- common/autotest_common.sh@1370 -- # local nb 00:18:51.519 00:13:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0040eb83-c960-421f-85cb-e3b10cd2c5c8 00:18:51.778 00:13:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:51.778 { 00:18:51.778 "name": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:51.778 "aliases": [ 00:18:51.778 "lvs/nvme0n1p0" 00:18:51.778 ], 00:18:51.778 "product_name": "Logical Volume", 00:18:51.778 "block_size": 4096, 00:18:51.778 "num_blocks": 26476544, 00:18:51.778 "uuid": "0040eb83-c960-421f-85cb-e3b10cd2c5c8", 00:18:51.778 "assigned_rate_limits": { 00:18:51.778 "rw_ios_per_sec": 0, 00:18:51.778 "rw_mbytes_per_sec": 0, 00:18:51.778 "r_mbytes_per_sec": 0, 00:18:51.778 "w_mbytes_per_sec": 0 00:18:51.778 }, 00:18:51.778 "claimed": false, 00:18:51.778 "zoned": false, 00:18:51.778 "supported_io_types": { 00:18:51.778 "read": true, 00:18:51.778 "write": true, 00:18:51.778 "unmap": true, 00:18:51.778 "write_zeroes": true, 00:18:51.778 "flush": false, 00:18:51.778 "reset": true, 00:18:51.778 "compare": false, 00:18:51.778 "compare_and_write": false, 00:18:51.778 "abort": false, 00:18:51.778 "nvme_admin": false, 00:18:51.778 "nvme_io": false 00:18:51.778 }, 00:18:51.778 "driver_specific": { 00:18:51.778 "lvol": { 00:18:51.778 "lvol_store_uuid": 
"b009675b-d06f-47bb-be17-e74d747cd05c", 00:18:51.778 "base_bdev": "nvme0n1", 00:18:51.778 "thin_provision": true, 00:18:51.778 "snapshot": false, 00:18:51.778 "clone": false, 00:18:51.778 "esnap_clone": false 00:18:51.778 } 00:18:51.778 } 00:18:51.778 } 00:18:51.778 ]' 00:18:51.778 00:13:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:51.778 00:13:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:51.778 00:13:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:51.778 00:13:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:51.778 00:13:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:51.778 00:13:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0040eb83-c960-421f-85cb-e3b10cd2c5c8 --l2p_dram_limit 10' 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:51.778 00:13:06 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0040eb83-c960-421f-85cb-e3b10cd2c5c8 --l2p_dram_limit 10 -c nvc0n1p0 00:18:51.778 [2024-11-28 00:13:06.376474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.778 [2024-11-28 00:13:06.376515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:51.778 [2024-11-28 00:13:06.376530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:51.778 [2024-11-28 00:13:06.376536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.778 [2024-11-28 00:13:06.376579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.778 [2024-11-28 00:13:06.376589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.778 [2024-11-28 00:13:06.376598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:51.778 [2024-11-28 00:13:06.376604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.778 [2024-11-28 00:13:06.376619] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:51.778 [2024-11-28 00:13:06.376842] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:51.778 [2024-11-28 00:13:06.376857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.778 [2024-11-28 00:13:06.376863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.778 [2024-11-28 00:13:06.376871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:18:51.778 [2024-11-28 00:13:06.376877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.778 [2024-11-28 00:13:06.376900] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e292497b-eef2-460b-acbc-ebae96479fae 00:18:51.778 [2024-11-28 00:13:06.377928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.778 [2024-11-28 00:13:06.377953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:51.778 [2024-11-28 00:13:06.377961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.021 ms 00:18:51.778 [2024-11-28 00:13:06.377968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.382599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.038 [2024-11-28 00:13:06.382721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.038 [2024-11-28 00:13:06.382733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:18:52.038 [2024-11-28 00:13:06.382744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.382811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.038 [2024-11-28 00:13:06.382819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.038 [2024-11-28 00:13:06.382825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:52.038 [2024-11-28 00:13:06.382833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.382863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.038 [2024-11-28 00:13:06.382873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:52.038 [2024-11-28 00:13:06.382881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:52.038 [2024-11-28 00:13:06.382888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.382907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:52.038 [2024-11-28 00:13:06.384120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.038 [2024-11-28 00:13:06.384137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.038 [2024-11-28 00:13:06.384145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:18:52.038 [2024-11-28 00:13:06.384151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.384179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.038 [2024-11-28 00:13:06.384186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:52.038 [2024-11-28 00:13:06.384196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:52.038 [2024-11-28 00:13:06.384202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.038 [2024-11-28 00:13:06.384215] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:52.038 [2024-11-28 00:13:06.384299] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:52.038 [2024-11-28 00:13:06.384309] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:52.038 [2024-11-28 00:13:06.384319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:52.039 [2024-11-28 00:13:06.384334] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384341] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384348] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:52.039 [2024-11-28 
00:13:06.384354] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:52.039 [2024-11-28 00:13:06.384383] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:52.039 [2024-11-28 00:13:06.384392] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:52.039 [2024-11-28 00:13:06.384399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.384405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:52.039 [2024-11-28 00:13:06.384412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:52.039 [2024-11-28 00:13:06.384417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.039 [2024-11-28 00:13:06.384468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.384474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:52.039 [2024-11-28 00:13:06.384483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:52.039 [2024-11-28 00:13:06.384488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.039 [2024-11-28 00:13:06.384547] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:52.039 [2024-11-28 00:13:06.384555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:52.039 [2024-11-28 00:13:06.384562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:52.039 [2024-11-28 00:13:06.384581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:52.039 [2024-11-28 00:13:06.384598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.039 [2024-11-28 00:13:06.384609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:52.039 [2024-11-28 00:13:06.384615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:52.039 [2024-11-28 00:13:06.384622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.039 [2024-11-28 00:13:06.384631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:52.039 [2024-11-28 00:13:06.384638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:52.039 [2024-11-28 00:13:06.384643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:52.039 [2024-11-28 00:13:06.384654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:52.039 [2024-11-28 00:13:06.384661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:52.039 [2024-11-28 00:13:06.384672] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:52.039 [2024-11-28 00:13:06.384676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:52.039 [2024-11-28 00:13:06.384687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:52.039 [2024-11-28 00:13:06.384705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:52.039 [2024-11-28 00:13:06.384723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:52.039 [2024-11-28 00:13:06.384741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:52.039 [2024-11-28 00:13:06.384756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.039 [2024-11-28 00:13:06.384767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:52.039 [2024-11-28 00:13:06.384774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:52.039 [2024-11-28 00:13:06.384780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.039 [2024-11-28 00:13:06.384786] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:52.039 [2024-11-28 00:13:06.384794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:52.039 [2024-11-28 00:13:06.384801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.039 [2024-11-28 00:13:06.384816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:52.039 [2024-11-28 00:13:06.384825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:52.039 [2024-11-28 00:13:06.384832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:52.039 [2024-11-28 00:13:06.384838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:52.039 [2024-11-28 00:13:06.384845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:52.039 [2024-11-28 00:13:06.384852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:52.039 [2024-11-28 00:13:06.384860] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:52.039 [2024-11-28 00:13:06.384868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.039 [2024-11-28 00:13:06.384876] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:52.039 [2024-11-28 00:13:06.384882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:52.039 [2024-11-28 00:13:06.384889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:52.039 [2024-11-28 00:13:06.384895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:52.039 [2024-11-28 00:13:06.384904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:52.039 [2024-11-28 00:13:06.384910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:52.039 [2024-11-28 00:13:06.384917] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:52.039 [2024-11-28 00:13:06.384923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:52.039 [2024-11-28 00:13:06.384932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:52.039 [2024-11-28 00:13:06.384938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:52.039 [2024-11-28 00:13:06.384945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:52.039 [2024-11-28 00:13:06.384951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:52.039 [2024-11-28 00:13:06.384959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:52.039 [2024-11-28 00:13:06.384965] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:52.039 [2024-11-28 00:13:06.384973] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.039 [2024-11-28 00:13:06.384980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:52.039 [2024-11-28 00:13:06.384987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:52.039 [2024-11-28 00:13:06.384993] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:52.039 [2024-11-28 00:13:06.385001] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:52.039 [2024-11-28 00:13:06.385007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.385014] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:52.039 [2024-11-28 00:13:06.385020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:18:52.039 [2024-11-28 00:13:06.385030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.039 [2024-11-28 00:13:06.390597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.390684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.039 [2024-11-28 00:13:06.390748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.535 ms 00:18:52.039 [2024-11-28 00:13:06.390769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.039 [2024-11-28 00:13:06.390883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.390947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:52.039 [2024-11-28 00:13:06.390966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:52.039 [2024-11-28 00:13:06.390982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.039 [2024-11-28 00:13:06.398743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.039 [2024-11-28 00:13:06.398834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.040 [2024-11-28 00:13:06.398879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.711 ms 00:18:52.040 [2024-11-28 00:13:06.398899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.398953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.398973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.040 [2024-11-28 00:13:06.398988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:52.040 [2024-11-28 00:13:06.399025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.399314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.399389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.040 [2024-11-28 00:13:06.399453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:18:52.040 [2024-11-28 00:13:06.399488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.399584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.399642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.040 [2024-11-28 00:13:06.399659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:52.040 [2024-11-28 00:13:06.399681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.404467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.404551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.040 [2024-11-28 00:13:06.404592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:18:52.040 [2024-11-28 00:13:06.404633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.411278] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 
10) MiB 00:18:52.040 [2024-11-28 00:13:06.413583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.413660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:52.040 [2024-11-28 00:13:06.413705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.881 ms 00:18:52.040 [2024-11-28 00:13:06.413722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.474866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.040 [2024-11-28 00:13:06.475021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:52.040 [2024-11-28 00:13:06.475081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.107 ms 00:18:52.040 [2024-11-28 00:13:06.475105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.040 [2024-11-28 00:13:06.475153] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:52.040 [2024-11-28 00:13:06.475192] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:54.571 [2024-11-28 00:13:09.012631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.012841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:54.571 [2024-11-28 00:13:09.012901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2537.460 ms 00:18:54.571 [2024-11-28 00:13:09.012921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.013082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.013104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.571 [2024-11-28 00:13:09.013158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:54.571 [2024-11-28 00:13:09.013177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.015866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.015963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:54.571 [2024-11-28 00:13:09.016015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:18:54.571 [2024-11-28 00:13:09.016033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.017962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.018048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:54.571 [2024-11-28 00:13:09.018112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:18:54.571 [2024-11-28 00:13:09.018127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.018269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.018288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.571 [2024-11-28 00:13:09.018309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:18:54.571 [2024-11-28 00:13:09.018353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.038641] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.038753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:54.571 [2024-11-28 00:13:09.038818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.229 ms 00:18:54.571 [2024-11-28 00:13:09.038838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.042496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.042583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:54.571 [2024-11-28 00:13:09.042629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:18:54.571 [2024-11-28 00:13:09.042648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.043618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.043696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:54.571 [2024-11-28 00:13:09.043737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:18:54.571 [2024-11-28 00:13:09.043758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.046616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.046701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.571 [2024-11-28 00:13:09.046715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:18:54.571 [2024-11-28 00:13:09.046721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.046753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.046761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.571 [2024-11-28 00:13:09.046769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.571 [2024-11-28 00:13:09.046775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.046827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.571 [2024-11-28 00:13:09.046834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:54.571 [2024-11-28 00:13:09.046844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:54.571 [2024-11-28 00:13:09.046850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.571 [2024-11-28 00:13:09.047510] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2670.734 ms, result 0 00:18:54.571 { 00:18:54.571 "name": "ftl0", 00:18:54.571 "uuid": "e292497b-eef2-460b-acbc-ebae96479fae" 00:18:54.571 } 00:18:54.571 00:13:09 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:18:54.571 00:13:09 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:54.829 00:13:09 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:18:54.829 00:13:09 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:18:54.829 00:13:09 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:18:55.087 /dev/nbd0 00:18:55.087 00:13:09 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:18:55.087 00:13:09 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:55.087 00:13:09 -- common/autotest_common.sh@867 -- # local i 00:18:55.087 00:13:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:55.087 00:13:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:55.087 00:13:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:55.087 00:13:09 -- common/autotest_common.sh@871 -- # break 00:18:55.087 00:13:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:55.087 00:13:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:55.087 00:13:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:18:55.087 1+0 records in 00:18:55.087 1+0 records out 00:18:55.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305131 s, 13.4 MB/s 00:18:55.087 00:13:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:18:55.087 00:13:09 -- common/autotest_common.sh@884 -- # size=4096 00:18:55.087 00:13:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:18:55.087 00:13:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:55.087 00:13:09 -- common/autotest_common.sh@887 -- # return 0 00:18:55.087 00:13:09 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:18:55.087 [2024-11-28 00:13:09.506760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:55.087 [2024-11-28 00:13:09.506865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85831 ] 00:18:55.087 [2024-11-28 00:13:09.653485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.087 [2024-11-28 00:13:09.682540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.462  [2024-11-28T00:13:12.024Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-28T00:13:12.992Z] Copying: 393/1024 [MB] (196 MBps) [2024-11-28T00:13:13.926Z] Copying: 630/1024 [MB] (236 MBps) [2024-11-28T00:13:14.493Z] Copying: 886/1024 [MB] (256 MBps) [2024-11-28T00:13:14.493Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:18:59.891 00:18:59.891 00:13:14 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:02.423 00:13:16 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:19:02.423 [2024-11-28 00:13:16.466696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
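Read back-to-back, the RPC calls and dd runs traced above amount to the construction and data-load sequence below, stripped of xtrace noise. lvs_uuid and lvol_uuid stand in for the run-specific UUIDs (b009675b-… and 0040eb83-…), and how the checksum is captured is not visible in xtrace, so that detail is left as a comment:

# Recap sketch of the setup this run performed; paths, sizes and flags are the ones in the log.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0     # base NVMe (nvme0n1)
lvs_uuid=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid") # 103424 MiB thin lvol
$rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0      # cache NVMe
$rpc_py bdev_split_create nvc0n1 -s 5171 1                               # 5171 MiB nvc0n1p0 for NV cache

# FTL bdev over the lvol, write buffer on nvc0n1p0, L2P capped at 10 MiB of DRAM.
$rpc_py -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" --l2p_dram_limit 10 -c nvc0n1p0

# Expose ftl0 via NBD and push 1 GiB (262144 x 4096 B) of random data through it.
modprobe nbd
$rpc_py nbd_start_disk ftl0 /dev/nbd0
$spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
md5sum "$testfile"   # checksum kept for the post-restart comparison (capture not shown by xtrace)
$spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct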
00:19:02.423 [2024-11-28 00:13:16.466901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85917 ] 00:19:02.423 [2024-11-28 00:13:16.605423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.423 [2024-11-28 00:13:16.632567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.358  [2024-11-28T00:13:18.895Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-28T00:13:19.831Z] Copying: 58/1024 [MB] (29 MBps) [2024-11-28T00:13:20.766Z] Copying: 90/1024 [MB] (31 MBps) [2024-11-28T00:13:21.699Z] Copying: 117/1024 [MB] (27 MBps) [2024-11-28T00:13:23.072Z] Copying: 147/1024 [MB] (30 MBps) [2024-11-28T00:13:24.007Z] Copying: 178/1024 [MB] (31 MBps) [2024-11-28T00:13:24.942Z] Copying: 212/1024 [MB] (33 MBps) [2024-11-28T00:13:25.875Z] Copying: 245/1024 [MB] (33 MBps) [2024-11-28T00:13:26.809Z] Copying: 275/1024 [MB] (30 MBps) [2024-11-28T00:13:27.743Z] Copying: 306/1024 [MB] (30 MBps) [2024-11-28T00:13:28.678Z] Copying: 336/1024 [MB] (30 MBps) [2024-11-28T00:13:30.054Z] Copying: 370/1024 [MB] (33 MBps) [2024-11-28T00:13:30.989Z] Copying: 401/1024 [MB] (31 MBps) [2024-11-28T00:13:31.923Z] Copying: 432/1024 [MB] (30 MBps) [2024-11-28T00:13:32.856Z] Copying: 463/1024 [MB] (30 MBps) [2024-11-28T00:13:33.866Z] Copying: 493/1024 [MB] (30 MBps) [2024-11-28T00:13:34.821Z] Copying: 525/1024 [MB] (31 MBps) [2024-11-28T00:13:35.755Z] Copying: 556/1024 [MB] (30 MBps) [2024-11-28T00:13:36.689Z] Copying: 584/1024 [MB] (28 MBps) [2024-11-28T00:13:38.061Z] Copying: 615/1024 [MB] (30 MBps) [2024-11-28T00:13:38.995Z] Copying: 645/1024 [MB] (30 MBps) [2024-11-28T00:13:39.928Z] Copying: 676/1024 [MB] (31 MBps) [2024-11-28T00:13:40.863Z] Copying: 710/1024 [MB] (33 MBps) [2024-11-28T00:13:41.799Z] Copying: 742/1024 [MB] (31 MBps) [2024-11-28T00:13:42.733Z] Copying: 775/1024 [MB] (33 MBps) [2024-11-28T00:13:44.106Z] Copying: 807/1024 [MB] (31 MBps) [2024-11-28T00:13:45.040Z] Copying: 837/1024 [MB] (30 MBps) [2024-11-28T00:13:45.974Z] Copying: 868/1024 [MB] (31 MBps) [2024-11-28T00:13:46.909Z] Copying: 900/1024 [MB] (31 MBps) [2024-11-28T00:13:47.844Z] Copying: 937/1024 [MB] (37 MBps) [2024-11-28T00:13:48.778Z] Copying: 968/1024 [MB] (30 MBps) [2024-11-28T00:13:49.712Z] Copying: 998/1024 [MB] (30 MBps) [2024-11-28T00:13:49.712Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:19:35.110 00:19:35.110 00:13:49 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:19:35.110 00:13:49 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:19:35.368 00:13:49 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:35.628 [2024-11-28 00:13:50.026823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 [2024-11-28 00:13:50.026877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:35.628 [2024-11-28 00:13:50.026894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.628 [2024-11-28 00:13:50.026905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.628 [2024-11-28 00:13:50.026930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.628 [2024-11-28 00:13:50.027400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 
[2024-11-28 00:13:50.027418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.628 [2024-11-28 00:13:50.027433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:19:35.628 [2024-11-28 00:13:50.027444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.628 [2024-11-28 00:13:50.029351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 [2024-11-28 00:13:50.029401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.628 [2024-11-28 00:13:50.029418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms 00:19:35.628 [2024-11-28 00:13:50.029426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.628 [2024-11-28 00:13:50.043587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 [2024-11-28 00:13:50.043710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.628 [2024-11-28 00:13:50.043774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.136 ms 00:19:35.628 [2024-11-28 00:13:50.043823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.628 [2024-11-28 00:13:50.050150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 [2024-11-28 00:13:50.050259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:35.629 [2024-11-28 00:13:50.050329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:19:35.629 [2024-11-28 00:13:50.050358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.051921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.051951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.629 [2024-11-28 00:13:50.051962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:19:35.629 [2024-11-28 00:13:50.051969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.056150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.056183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.629 [2024-11-28 00:13:50.056197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:19:35.629 [2024-11-28 00:13:50.056205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.056327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.056340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.629 [2024-11-28 00:13:50.056350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:35.629 [2024-11-28 00:13:50.056357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.058301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.058334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:35.629 [2024-11-28 00:13:50.058344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:19:35.629 [2024-11-28 00:13:50.058351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.059826] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.059856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:35.629 [2024-11-28 00:13:50.059867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:19:35.629 [2024-11-28 00:13:50.059874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.061163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.061192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.629 [2024-11-28 00:13:50.061202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:19:35.629 [2024-11-28 00:13:50.061208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.062435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.629 [2024-11-28 00:13:50.062473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.629 [2024-11-28 00:13:50.062483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:19:35.629 [2024-11-28 00:13:50.062490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.629 [2024-11-28 00:13:50.062521] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.629 [2024-11-28 00:13:50.062535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:19:35.629 [2024-11-28 00:13:50.062661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.629 [2024-11-28 00:13:50.062895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.062999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063273] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.630 [2024-11-28 00:13:50.063389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.630 [2024-11-28 00:13:50.063400] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e292497b-eef2-460b-acbc-ebae96479fae 00:19:35.630 [2024-11-28 00:13:50.063409] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.630 [2024-11-28 00:13:50.063418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.630 [2024-11-28 00:13:50.063425] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.630 [2024-11-28 00:13:50.063434] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:35.630 [2024-11-28 00:13:50.063440] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.630 [2024-11-28 00:13:50.063451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.630 [2024-11-28 00:13:50.063458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.630 [2024-11-28 00:13:50.063465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.630 [2024-11-28 00:13:50.063471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.630 [2024-11-28 00:13:50.063479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.630 [2024-11-28 00:13:50.063491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.630 [2024-11-28 00:13:50.063504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:19:35.630 [2024-11-28 00:13:50.063510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.630 [2024-11-28 00:13:50.064894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.631 [2024-11-28 00:13:50.064915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.631 [2024-11-28 00:13:50.064925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 
00:19:35.631 [2024-11-28 00:13:50.064932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.064986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.631 [2024-11-28 00:13:50.064993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.631 [2024-11-28 00:13:50.065002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:35.631 [2024-11-28 00:13:50.065009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.070096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.070347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.631 [2024-11-28 00:13:50.070396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.070404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.070459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.070467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.631 [2024-11-28 00:13:50.070476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.070483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.070536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.070545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.631 [2024-11-28 00:13:50.070554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.070561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.070580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.070587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.631 [2024-11-28 00:13:50.070596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.070603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.079711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.079743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.631 [2024-11-28 00:13:50.079754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.079761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.631 [2024-11-28 00:13:50.083458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.631 [2024-11-28 
00:13:50.083543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.631 [2024-11-28 00:13:50.083593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.631 [2024-11-28 00:13:50.083682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.631 [2024-11-28 00:13:50.083741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.631 [2024-11-28 00:13:50.083803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.083850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.631 [2024-11-28 00:13:50.083859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.631 [2024-11-28 00:13:50.083868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.631 [2024-11-28 00:13:50.083875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.631 [2024-11-28 00:13:50.084006] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.148 ms, result 0 00:19:35.631 true 00:19:35.631 00:13:50 -- ftl/dirty_shutdown.sh@83 -- # kill -9 85699 00:19:35.631 00:13:50 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid85699 00:19:35.631 00:13:50 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:19:35.631 [2024-11-28 00:13:50.166145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:35.631 [2024-11-28 00:13:50.166249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86304 ] 00:19:35.892 [2024-11-28 00:13:50.314127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.892 [2024-11-28 00:13:50.344156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.826  [2024-11-28T00:13:52.401Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-28T00:13:53.776Z] Copying: 461/1024 [MB] (261 MBps) [2024-11-28T00:13:54.709Z] Copying: 721/1024 [MB] (259 MBps) [2024-11-28T00:13:54.709Z] Copying: 980/1024 [MB] (259 MBps) [2024-11-28T00:13:54.709Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:19:40.107 00:19:40.107 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 85699 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:19:40.107 00:13:54 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.365 [2024-11-28 00:13:54.763088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:40.365 [2024-11-28 00:13:54.763205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86357 ] 00:19:40.365 [2024-11-28 00:13:54.907956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.365 [2024-11-28 00:13:54.936670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.623 [2024-11-28 00:13:55.015982] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.623 [2024-11-28 00:13:55.016043] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.623 [2024-11-28 00:13:55.075439] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:40.623 [2024-11-28 00:13:55.075742] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:40.623 [2024-11-28 00:13:55.076125] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:40.881 [2024-11-28 00:13:55.260276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.260331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.881 [2024-11-28 00:13:55.260343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:40.881 [2024-11-28 00:13:55.260352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.260409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.260420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.881 [2024-11-28 00:13:55.260434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:40.881 [2024-11-28 00:13:55.260442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.260464] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.881 [2024-11-28 00:13:55.260778] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.881 [2024-11-28 00:13:55.260799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.260807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.881 [2024-11-28 00:13:55.260815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:19:40.881 [2024-11-28 00:13:55.260822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.261873] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.881 [2024-11-28 00:13:55.264043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.264080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.881 [2024-11-28 00:13:55.264089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:19:40.881 [2024-11-28 00:13:55.264097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.264145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.264155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.881 [2024-11-28 00:13:55.264167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:40.881 [2024-11-28 00:13:55.264177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.268738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.268769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.881 [2024-11-28 00:13:55.268782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:19:40.881 [2024-11-28 00:13:55.268791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.268863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.268875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.881 [2024-11-28 00:13:55.268885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:40.881 [2024-11-28 00:13:55.268895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.268935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.268945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.881 [2024-11-28 00:13:55.268957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.881 [2024-11-28 00:13:55.268964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.268990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.881 [2024-11-28 00:13:55.270293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.270324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.881 [2024-11-28 00:13:55.270337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:19:40.881 [2024-11-28 00:13:55.270347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.270401] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.270410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.881 [2024-11-28 00:13:55.270422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:40.881 [2024-11-28 00:13:55.270429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.270450] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.881 [2024-11-28 00:13:55.270467] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:40.881 [2024-11-28 00:13:55.270501] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.881 [2024-11-28 00:13:55.270518] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:40.881 [2024-11-28 00:13:55.270588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:40.881 [2024-11-28 00:13:55.270598] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.881 [2024-11-28 00:13:55.270608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:40.881 [2024-11-28 00:13:55.270617] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.881 [2024-11-28 00:13:55.270626] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.881 [2024-11-28 00:13:55.270638] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:40.881 [2024-11-28 00:13:55.270644] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.881 [2024-11-28 00:13:55.270655] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:40.881 [2024-11-28 00:13:55.270665] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:40.881 [2024-11-28 00:13:55.270672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.270679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.881 [2024-11-28 00:13:55.270686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:19:40.881 [2024-11-28 00:13:55.270693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.270754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.881 [2024-11-28 00:13:55.270762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.881 [2024-11-28 00:13:55.270769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:40.881 [2024-11-28 00:13:55.270779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.881 [2024-11-28 00:13:55.270853] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.881 [2024-11-28 00:13:55.270870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.881 [2024-11-28 00:13:55.270878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.881 [2024-11-28 00:13:55.270889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:40.881 [2024-11-28 00:13:55.270897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.881 [2024-11-28 00:13:55.270905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.881 [2024-11-28 00:13:55.270912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:40.881 [2024-11-28 00:13:55.270918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.881 [2024-11-28 00:13:55.270926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:40.881 [2024-11-28 00:13:55.270937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.881 [2024-11-28 00:13:55.270945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.881 [2024-11-28 00:13:55.270952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:40.881 [2024-11-28 00:13:55.270961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.881 [2024-11-28 00:13:55.270969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.881 [2024-11-28 00:13:55.270976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:40.881 [2024-11-28 00:13:55.270984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.881 [2024-11-28 00:13:55.270991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.881 [2024-11-28 00:13:55.270998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:40.881 [2024-11-28 00:13:55.271005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.881 [2024-11-28 00:13:55.271013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:40.881 [2024-11-28 00:13:55.271020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:40.881 [2024-11-28 00:13:55.271031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:40.881 [2024-11-28 00:13:55.271039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.881 [2024-11-28 00:13:55.271046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:40.881 [2024-11-28 00:13:55.271054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.881 [2024-11-28 00:13:55.271061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.881 [2024-11-28 00:13:55.271069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:40.881 [2024-11-28 00:13:55.271076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.881 [2024-11-28 00:13:55.271083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.881 [2024-11-28 00:13:55.271090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:40.882 [2024-11-28 00:13:55.271097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.882 [2024-11-28 00:13:55.271105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.882 [2024-11-28 00:13:55.271112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:40.882 [2024-11-28 00:13:55.271119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:40.882 [2024-11-28 00:13:55.271126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.882 [2024-11-28 00:13:55.271133] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:19:40.882 [2024-11-28 00:13:55.271140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.882 [2024-11-28 00:13:55.271152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.882 [2024-11-28 00:13:55.271160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:40.882 [2024-11-28 00:13:55.271167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.882 [2024-11-28 00:13:55.271173] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.882 [2024-11-28 00:13:55.271182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.882 [2024-11-28 00:13:55.271190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.882 [2024-11-28 00:13:55.271197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.882 [2024-11-28 00:13:55.271208] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.882 [2024-11-28 00:13:55.271216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.882 [2024-11-28 00:13:55.271223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.882 [2024-11-28 00:13:55.271231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.882 [2024-11-28 00:13:55.271238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.882 [2024-11-28 00:13:55.271245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.882 [2024-11-28 00:13:55.271253] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.882 [2024-11-28 00:13:55.271266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.882 [2024-11-28 00:13:55.271277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:40.882 [2024-11-28 00:13:55.271287] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:40.882 [2024-11-28 00:13:55.271295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:40.882 [2024-11-28 00:13:55.271303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:40.882 [2024-11-28 00:13:55.271311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:40.882 [2024-11-28 00:13:55.271320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:40.882 [2024-11-28 00:13:55.271328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:40.882 [2024-11-28 00:13:55.271336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:40.882 [2024-11-28 00:13:55.271345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:40.882 [2024-11-28 00:13:55.271353] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:40.882 [2024-11-28 00:13:55.271376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:40.882 [2024-11-28 00:13:55.271384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:40.882 [2024-11-28 00:13:55.271392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:40.882 [2024-11-28 00:13:55.271398] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.882 [2024-11-28 00:13:55.271410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.882 [2024-11-28 00:13:55.271417] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.882 [2024-11-28 00:13:55.271424] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.882 [2024-11-28 00:13:55.271435] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.882 [2024-11-28 00:13:55.271442] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.882 [2024-11-28 00:13:55.271450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.271456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.882 [2024-11-28 00:13:55.271467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:19:40.882 [2024-11-28 00:13:55.271473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.277353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.277416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.882 [2024-11-28 00:13:55.277426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.844 ms 00:19:40.882 [2024-11-28 00:13:55.277435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.277517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.277525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.882 [2024-11-28 00:13:55.277537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:40.882 [2024-11-28 00:13:55.277544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.294982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.295038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.882 [2024-11-28 00:13:55.295056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.398 ms 00:19:40.882 [2024-11-28 00:13:55.295073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.295131] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.295145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.882 [2024-11-28 00:13:55.295158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.882 [2024-11-28 00:13:55.295170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.295615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.295655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.882 [2024-11-28 00:13:55.295670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:19:40.882 [2024-11-28 00:13:55.295684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.295870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.295898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.882 [2024-11-28 00:13:55.295912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:40.882 [2024-11-28 00:13:55.295924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.302290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.302341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.882 [2024-11-28 00:13:55.302355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.336 ms 00:19:40.882 [2024-11-28 00:13:55.302383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.305119] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:40.882 [2024-11-28 00:13:55.305193] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.882 [2024-11-28 00:13:55.305214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.305226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.882 [2024-11-28 00:13:55.305237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.717 ms 00:19:40.882 [2024-11-28 00:13:55.305256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.319889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.319931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.882 [2024-11-28 00:13:55.319941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.580 ms 00:19:40.882 [2024-11-28 00:13:55.319949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.321771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.321805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.882 [2024-11-28 00:13:55.321814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:19:40.882 [2024-11-28 00:13:55.321821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.323183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:40.882 [2024-11-28 00:13:55.323215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.882 [2024-11-28 00:13:55.323223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:19:40.882 [2024-11-28 00:13:55.323230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.323435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.323455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.882 [2024-11-28 00:13:55.323464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:40.882 [2024-11-28 00:13:55.323472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.342027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.342075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.882 [2024-11-28 00:13:55.342086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.539 ms 00:19:40.882 [2024-11-28 00:13:55.342103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.882 [2024-11-28 00:13:55.349572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:40.882 [2024-11-28 00:13:55.352043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.882 [2024-11-28 00:13:55.352073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.883 [2024-11-28 00:13:55.352084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.892 ms 00:19:40.883 [2024-11-28 00:13:55.352091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.352157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.352170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.883 [2024-11-28 00:13:55.352178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.883 [2024-11-28 00:13:55.352185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.352243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.352252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.883 [2024-11-28 00:13:55.352271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:40.883 [2024-11-28 00:13:55.352278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.353566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.353597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:40.883 [2024-11-28 00:13:55.353608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:19:40.883 [2024-11-28 00:13:55.353615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.353643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.353651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.883 [2024-11-28 00:13:55.353659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.883 
[2024-11-28 00:13:55.353665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.353698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.883 [2024-11-28 00:13:55.353711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.353718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.883 [2024-11-28 00:13:55.353726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:40.883 [2024-11-28 00:13:55.353738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.356963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.356997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.883 [2024-11-28 00:13:55.357011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:19:40.883 [2024-11-28 00:13:55.357019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.357082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.883 [2024-11-28 00:13:55.357091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.883 [2024-11-28 00:13:55.357099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:40.883 [2024-11-28 00:13:55.357106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.883 [2024-11-28 00:13:55.358069] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.405 ms, result 0 00:19:41.816  [2024-11-28T00:13:57.791Z] Copying: 42/1024 [MB] (42 MBps) [2024-11-28T00:13:58.723Z] Copying: 87/1024 [MB] (44 MBps) [2024-11-28T00:13:59.657Z] Copying: 127/1024 [MB] (39 MBps) [2024-11-28T00:14:00.589Z] Copying: 166/1024 [MB] (38 MBps) [2024-11-28T00:14:01.523Z] Copying: 210/1024 [MB] (44 MBps) [2024-11-28T00:14:02.456Z] Copying: 253/1024 [MB] (42 MBps) [2024-11-28T00:14:03.390Z] Copying: 298/1024 [MB] (44 MBps) [2024-11-28T00:14:04.764Z] Copying: 341/1024 [MB] (43 MBps) [2024-11-28T00:14:05.697Z] Copying: 386/1024 [MB] (44 MBps) [2024-11-28T00:14:06.632Z] Copying: 430/1024 [MB] (44 MBps) [2024-11-28T00:14:07.567Z] Copying: 475/1024 [MB] (45 MBps) [2024-11-28T00:14:08.502Z] Copying: 520/1024 [MB] (44 MBps) [2024-11-28T00:14:09.436Z] Copying: 564/1024 [MB] (44 MBps) [2024-11-28T00:14:10.808Z] Copying: 609/1024 [MB] (45 MBps) [2024-11-28T00:14:11.373Z] Copying: 654/1024 [MB] (44 MBps) [2024-11-28T00:14:12.743Z] Copying: 698/1024 [MB] (44 MBps) [2024-11-28T00:14:13.680Z] Copying: 743/1024 [MB] (44 MBps) [2024-11-28T00:14:14.616Z] Copying: 787/1024 [MB] (44 MBps) [2024-11-28T00:14:15.552Z] Copying: 833/1024 [MB] (45 MBps) [2024-11-28T00:14:16.487Z] Copying: 879/1024 [MB] (46 MBps) [2024-11-28T00:14:17.422Z] Copying: 925/1024 [MB] (45 MBps) [2024-11-28T00:14:18.796Z] Copying: 972/1024 [MB] (46 MBps) [2024-11-28T00:14:19.731Z] Copying: 1016/1024 [MB] (44 MBps) [2024-11-28T00:14:19.731Z] Copying: 1048480/1048576 [kB] (7440 kBps) [2024-11-28T00:14:19.731Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-28 00:14:19.486745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.486794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.129 [2024-11-28 00:14:19.486808] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:05.129 [2024-11-28 00:14:19.486817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.487704] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.129 [2024-11-28 00:14:19.490083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.490121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.129 [2024-11-28 00:14:19.490130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:20:05.129 [2024-11-28 00:14:19.490138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.502536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.502573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.129 [2024-11-28 00:14:19.502585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.268 ms 00:20:05.129 [2024-11-28 00:14:19.502594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.522062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.522096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.129 [2024-11-28 00:14:19.522112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.452 ms 00:20:05.129 [2024-11-28 00:14:19.522119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.528202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.528231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:05.129 [2024-11-28 00:14:19.528241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.056 ms 00:20:05.129 [2024-11-28 00:14:19.528250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.529481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.529513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.129 [2024-11-28 00:14:19.529521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:20:05.129 [2024-11-28 00:14:19.529528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.532673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.532709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.129 [2024-11-28 00:14:19.532718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:20:05.129 [2024-11-28 00:14:19.532725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.586937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.586988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.129 [2024-11-28 00:14:19.587000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.178 ms 00:20:05.129 [2024-11-28 00:14:19.587007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.588763] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.588794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.129 [2024-11-28 00:14:19.588803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:20:05.129 [2024-11-28 00:14:19.588810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.589946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.589980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.129 [2024-11-28 00:14:19.589990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:20:05.129 [2024-11-28 00:14:19.589996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.590836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.590867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.129 [2024-11-28 00:14:19.590876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:20:05.129 [2024-11-28 00:14:19.590883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.591706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-11-28 00:14:19.591735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.129 [2024-11-28 00:14:19.591744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:20:05.129 [2024-11-28 00:14:19.591750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-11-28 00:14:19.591776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.129 [2024-11-28 00:14:19.591789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122880 / 261120 wr_cnt: 1 state: open 00:20:05.129 [2024-11-28 00:14:19.591798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 
261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.591993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 
00:14:19.592435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.130 [2024-11-28 00:14:19.592463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.131 [2024-11-28 00:14:19.592543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.131 [2024-11-28 00:14:19.592554] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e292497b-eef2-460b-acbc-ebae96479fae 00:20:05.131 [2024-11-28 00:14:19.592561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122880 00:20:05.131 [2024-11-28 00:14:19.592568] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123840 00:20:05.131 [2024-11-28 00:14:19.592575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122880 00:20:05.131 [2024-11-28 00:14:19.592583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:20:05.131 [2024-11-28 00:14:19.592590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.131 [2024-11-28 00:14:19.592597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.131 [2024-11-28 00:14:19.592604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.131 [2024-11-28 00:14:19.592610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.131 [2024-11-28 00:14:19.592616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.131 [2024-11-28 00:14:19.592623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.131 [2024-11-28 00:14:19.592631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.131 [2024-11-28 00:14:19.592639] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:20:05.131 [2024-11-28 00:14:19.592646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.593972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.131 [2024-11-28 00:14:19.593996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.131 [2024-11-28 00:14:19.594005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:20:05.131 [2024-11-28 00:14:19.594012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.594068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.131 [2024-11-28 00:14:19.594076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.131 [2024-11-28 00:14:19.594086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:05.131 [2024-11-28 00:14:19.594093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.598816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.598853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.131 [2024-11-28 00:14:19.598868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.598876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.598928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.598936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.131 [2024-11-28 00:14:19.598950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.598960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.599017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.599027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.131 [2024-11-28 00:14:19.599038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.599045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.599059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.599066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.131 [2024-11-28 00:14:19.599073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.599082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.607047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.607083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.131 [2024-11-28 00:14:19.607092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.607099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.610561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 
00:20:05.131 [2024-11-28 00:14:19.610572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.610581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.610625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.131 [2024-11-28 00:14:19.610632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.610639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.610684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.131 [2024-11-28 00:14:19.610692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.610698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.610886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.131 [2024-11-28 00:14:19.610894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.610901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.610936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.131 [2024-11-28 00:14:19.610947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.610954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.610994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.611002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.131 [2024-11-28 00:14:19.611009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.611016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.611058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.131 [2024-11-28 00:14:19.611066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.131 [2024-11-28 00:14:19.611077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.131 [2024-11-28 00:14:19.611087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-11-28 00:14:19.611203] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 127.272 ms, result 0 00:20:06.506 00:20:06.506 00:20:06.506 00:14:20 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:20:08.407 00:14:22 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:08.407 [2024-11-28 00:14:22.655647] Starting SPDK 
v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:08.407 [2024-11-28 00:14:22.655763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86648 ] 00:20:08.407 [2024-11-28 00:14:22.804863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.407 [2024-11-28 00:14:22.835938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.407 [2024-11-28 00:14:22.920283] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.407 [2024-11-28 00:14:22.920355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.667 [2024-11-28 00:14:23.069837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.069886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:08.667 [2024-11-28 00:14:23.069899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:08.667 [2024-11-28 00:14:23.069906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.069948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.069957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.667 [2024-11-28 00:14:23.069965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:08.667 [2024-11-28 00:14:23.069974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.069992] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:08.667 [2024-11-28 00:14:23.070682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:08.667 [2024-11-28 00:14:23.070719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.070731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.667 [2024-11-28 00:14:23.070740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:20:08.667 [2024-11-28 00:14:23.070747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.071811] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:08.667 [2024-11-28 00:14:23.074353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.074404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:08.667 [2024-11-28 00:14:23.074418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:20:08.667 [2024-11-28 00:14:23.074425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.074476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.074485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:08.667 [2024-11-28 00:14:23.074493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:08.667 [2024-11-28 00:14:23.074500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.079311] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.079346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.667 [2024-11-28 00:14:23.079355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:20:08.667 [2024-11-28 00:14:23.079372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.079442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.079451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.667 [2024-11-28 00:14:23.079458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:08.667 [2024-11-28 00:14:23.079465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.079506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.079515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:08.667 [2024-11-28 00:14:23.079526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:08.667 [2024-11-28 00:14:23.079535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.079561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.667 [2024-11-28 00:14:23.080860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.080890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.667 [2024-11-28 00:14:23.080899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:20:08.667 [2024-11-28 00:14:23.080905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.080937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.667 [2024-11-28 00:14:23.080944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:08.667 [2024-11-28 00:14:23.080953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.667 [2024-11-28 00:14:23.080960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.667 [2024-11-28 00:14:23.080978] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:08.667 [2024-11-28 00:14:23.080994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:08.667 [2024-11-28 00:14:23.081028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:08.668 [2024-11-28 00:14:23.081048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:08.668 [2024-11-28 00:14:23.081122] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:08.668 [2024-11-28 00:14:23.081134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:08.668 [2024-11-28 00:14:23.081147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:08.668 [2024-11-28 00:14:23.081156] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:08.668 
[2024-11-28 00:14:23.081176] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081183] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:08.668 [2024-11-28 00:14:23.081195] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:08.668 [2024-11-28 00:14:23.081202] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:08.668 [2024-11-28 00:14:23.081209] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:08.668 [2024-11-28 00:14:23.081218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.668 [2024-11-28 00:14:23.081225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:08.668 [2024-11-28 00:14:23.081232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:08.668 [2024-11-28 00:14:23.081240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.668 [2024-11-28 00:14:23.081299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.668 [2024-11-28 00:14:23.081307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:08.668 [2024-11-28 00:14:23.081314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:08.668 [2024-11-28 00:14:23.081322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.668 [2024-11-28 00:14:23.081411] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:08.668 [2024-11-28 00:14:23.081422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:08.668 [2024-11-28 00:14:23.081430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:08.668 [2024-11-28 00:14:23.081453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:08.668 [2024-11-28 00:14:23.081474] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.668 [2024-11-28 00:14:23.081486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:08.668 [2024-11-28 00:14:23.081493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:08.668 [2024-11-28 00:14:23.081500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.668 [2024-11-28 00:14:23.081512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:08.668 [2024-11-28 00:14:23.081520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:08.668 [2024-11-28 00:14:23.081528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:08.668 [2024-11-28 00:14:23.081543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:08.668 [2024-11-28 
00:14:23.081550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:08.668 [2024-11-28 00:14:23.081564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:08.668 [2024-11-28 00:14:23.081572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:08.668 [2024-11-28 00:14:23.081586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:08.668 [2024-11-28 00:14:23.081608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:08.668 [2024-11-28 00:14:23.081629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:08.668 [2024-11-28 00:14:23.081652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:08.668 [2024-11-28 00:14:23.081673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.668 [2024-11-28 00:14:23.081687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:08.668 [2024-11-28 00:14:23.081694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:08.668 [2024-11-28 00:14:23.081702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.668 [2024-11-28 00:14:23.081710] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:08.668 [2024-11-28 00:14:23.081718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:08.668 [2024-11-28 00:14:23.081725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.668 [2024-11-28 00:14:23.081748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:08.668 [2024-11-28 00:14:23.081755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:08.668 [2024-11-28 00:14:23.081763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:08.668 [2024-11-28 00:14:23.081772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:08.668 [2024-11-28 00:14:23.081779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:08.668 [2024-11-28 00:14:23.081787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
102400.00 MiB 00:20:08.668 [2024-11-28 00:14:23.081796] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:08.668 [2024-11-28 00:14:23.081805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.668 [2024-11-28 00:14:23.081814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:08.668 [2024-11-28 00:14:23.081822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:08.668 [2024-11-28 00:14:23.081830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:08.668 [2024-11-28 00:14:23.081838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:08.668 [2024-11-28 00:14:23.081846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:08.668 [2024-11-28 00:14:23.081854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:08.668 [2024-11-28 00:14:23.081861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:08.668 [2024-11-28 00:14:23.081869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:08.668 [2024-11-28 00:14:23.081878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:08.668 [2024-11-28 00:14:23.081885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:08.668 [2024-11-28 00:14:23.081893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:08.668 [2024-11-28 00:14:23.081903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:08.668 [2024-11-28 00:14:23.081911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:08.668 [2024-11-28 00:14:23.081920] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:08.668 [2024-11-28 00:14:23.081932] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.668 [2024-11-28 00:14:23.081940] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:08.668 [2024-11-28 00:14:23.081949] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:08.668 [2024-11-28 00:14:23.081957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:08.668 [2024-11-28 00:14:23.081964] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:08.668 [2024-11-28 00:14:23.081972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.668 [2024-11-28 00:14:23.081988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:08.668 [2024-11-28 00:14:23.081996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:20:08.668 [2024-11-28 00:14:23.082010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.668 [2024-11-28 00:14:23.087987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.668 [2024-11-28 00:14:23.088017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.668 [2024-11-28 00:14:23.088032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:20:08.668 [2024-11-28 00:14:23.088039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.668 [2024-11-28 00:14:23.088119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.088127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.669 [2024-11-28 00:14:23.088135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:08.669 [2024-11-28 00:14:23.088142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.105261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.105305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.669 [2024-11-28 00:14:23.105321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.077 ms 00:20:08.669 [2024-11-28 00:14:23.105331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.105397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.105408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.669 [2024-11-28 00:14:23.105418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.669 [2024-11-28 00:14:23.105430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.105791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.105821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.669 [2024-11-28 00:14:23.105831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:08.669 [2024-11-28 00:14:23.105839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.105967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.105978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.669 [2024-11-28 00:14:23.105988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:08.669 [2024-11-28 00:14:23.105997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.111524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.111556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.669 [2024-11-28 00:14:23.111566] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.499 ms 00:20:08.669 [2024-11-28 00:14:23.111572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.113953] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:08.669 [2024-11-28 00:14:23.113991] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:08.669 [2024-11-28 00:14:23.114001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.114008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:08.669 [2024-11-28 00:14:23.114016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:20:08.669 [2024-11-28 00:14:23.114022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.128494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.128528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:08.669 [2024-11-28 00:14:23.128549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:20:08.669 [2024-11-28 00:14:23.128557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.130664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.130695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:08.669 [2024-11-28 00:14:23.130704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:20:08.669 [2024-11-28 00:14:23.130710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.132329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.132358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:08.669 [2024-11-28 00:14:23.132376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:20:08.669 [2024-11-28 00:14:23.132383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.132563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.132573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:08.669 [2024-11-28 00:14:23.132581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:08.669 [2024-11-28 00:14:23.132590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.150904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.150942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:08.669 [2024-11-28 00:14:23.150959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.300 ms 00:20:08.669 [2024-11-28 00:14:23.150966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.158280] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:08.669 [2024-11-28 00:14:23.160458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.160485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize L2P 00:20:08.669 [2024-11-28 00:14:23.160502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.454 ms 00:20:08.669 [2024-11-28 00:14:23.160510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.160561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.160571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:08.669 [2024-11-28 00:14:23.160580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.669 [2024-11-28 00:14:23.160590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.161705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.161741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:08.669 [2024-11-28 00:14:23.161750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:20:08.669 [2024-11-28 00:14:23.161757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.162982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.163011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:08.669 [2024-11-28 00:14:23.163019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:20:08.669 [2024-11-28 00:14:23.163026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.163063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.163071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:08.669 [2024-11-28 00:14:23.163083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.669 [2024-11-28 00:14:23.163090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.163122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:08.669 [2024-11-28 00:14:23.163133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.163145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:08.669 [2024-11-28 00:14:23.163155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.669 [2024-11-28 00:14:23.163162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.166451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.166485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:08.669 [2024-11-28 00:14:23.166502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:20:08.669 [2024-11-28 00:14:23.166512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.669 [2024-11-28 00:14:23.166576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.669 [2024-11-28 00:14:23.166585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:08.669 [2024-11-28 00:14:23.166596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:08.669 [2024-11-28 00:14:23.166603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:08.669 [2024-11-28 00:14:23.172991] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.199 ms, result 0 00:20:10.045  [2024-11-28T00:14:25.581Z] Copying: 1124/1048576 [kB] (1124 kBps) [2024-11-28T00:14:26.516Z] Copying: 6156/1048576 [kB] (5032 kBps) [2024-11-28T00:14:27.452Z] Copying: 45/1024 [MB] (39 MBps) [2024-11-28T00:14:28.388Z] Copying: 77/1024 [MB] (32 MBps) [2024-11-28T00:14:29.354Z] Copying: 110/1024 [MB] (33 MBps) [2024-11-28T00:14:30.737Z] Copying: 136/1024 [MB] (25 MBps) [2024-11-28T00:14:31.672Z] Copying: 181/1024 [MB] (45 MBps) [2024-11-28T00:14:32.609Z] Copying: 209/1024 [MB] (28 MBps) [2024-11-28T00:14:33.543Z] Copying: 235/1024 [MB] (25 MBps) [2024-11-28T00:14:34.478Z] Copying: 267/1024 [MB] (32 MBps) [2024-11-28T00:14:35.412Z] Copying: 304/1024 [MB] (36 MBps) [2024-11-28T00:14:36.787Z] Copying: 333/1024 [MB] (29 MBps) [2024-11-28T00:14:37.353Z] Copying: 370/1024 [MB] (37 MBps) [2024-11-28T00:14:38.728Z] Copying: 403/1024 [MB] (32 MBps) [2024-11-28T00:14:39.663Z] Copying: 428/1024 [MB] (24 MBps) [2024-11-28T00:14:40.597Z] Copying: 446/1024 [MB] (18 MBps) [2024-11-28T00:14:41.533Z] Copying: 463/1024 [MB] (17 MBps) [2024-11-28T00:14:42.468Z] Copying: 481/1024 [MB] (17 MBps) [2024-11-28T00:14:43.403Z] Copying: 500/1024 [MB] (18 MBps) [2024-11-28T00:14:44.778Z] Copying: 520/1024 [MB] (19 MBps) [2024-11-28T00:14:45.710Z] Copying: 539/1024 [MB] (18 MBps) [2024-11-28T00:14:46.643Z] Copying: 558/1024 [MB] (19 MBps) [2024-11-28T00:14:47.657Z] Copying: 577/1024 [MB] (18 MBps) [2024-11-28T00:14:48.593Z] Copying: 595/1024 [MB] (18 MBps) [2024-11-28T00:14:49.529Z] Copying: 627/1024 [MB] (31 MBps) [2024-11-28T00:14:50.465Z] Copying: 653/1024 [MB] (26 MBps) [2024-11-28T00:14:51.401Z] Copying: 699/1024 [MB] (45 MBps) [2024-11-28T00:14:52.777Z] Copying: 723/1024 [MB] (24 MBps) [2024-11-28T00:14:53.714Z] Copying: 742/1024 [MB] (18 MBps) [2024-11-28T00:14:54.650Z] Copying: 774/1024 [MB] (31 MBps) [2024-11-28T00:14:55.584Z] Copying: 804/1024 [MB] (30 MBps) [2024-11-28T00:14:56.517Z] Copying: 834/1024 [MB] (30 MBps) [2024-11-28T00:14:57.451Z] Copying: 860/1024 [MB] (25 MBps) [2024-11-28T00:14:58.386Z] Copying: 886/1024 [MB] (25 MBps) [2024-11-28T00:14:59.760Z] Copying: 918/1024 [MB] (32 MBps) [2024-11-28T00:15:00.694Z] Copying: 943/1024 [MB] (24 MBps) [2024-11-28T00:15:01.629Z] Copying: 983/1024 [MB] (40 MBps) [2024-11-28T00:15:01.629Z] Copying: 1023/1024 [MB] (40 MBps) [2024-11-28T00:15:01.890Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-28 00:15:01.786175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.786239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.288 [2024-11-28 00:15:01.786253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:47.288 [2024-11-28 00:15:01.786260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.786281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:47.288 [2024-11-28 00:15:01.786825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.786850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.288 [2024-11-28 00:15:01.786859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:20:47.288 [2024-11-28 00:15:01.786869] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.787094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.787105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.288 [2024-11-28 00:15:01.787179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:47.288 [2024-11-28 00:15:01.787188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.799120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.799156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.288 [2024-11-28 00:15:01.799167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.916 ms 00:20:47.288 [2024-11-28 00:15:01.799175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.805570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.805601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:47.288 [2024-11-28 00:15:01.805611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.363 ms 00:20:47.288 [2024-11-28 00:15:01.805618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.807886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.807917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.288 [2024-11-28 00:15:01.807926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:20:47.288 [2024-11-28 00:15:01.807933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.811581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.811614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.288 [2024-11-28 00:15:01.811624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:20:47.288 [2024-11-28 00:15:01.811631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.288 [2024-11-28 00:15:01.821631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.288 [2024-11-28 00:15:01.821670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.289 [2024-11-28 00:15:01.821682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.960 ms 00:20:47.289 [2024-11-28 00:15:01.821689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.289 [2024-11-28 00:15:01.823481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.289 [2024-11-28 00:15:01.823524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:47.289 [2024-11-28 00:15:01.823534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:20:47.289 [2024-11-28 00:15:01.823540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.289 [2024-11-28 00:15:01.824873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.289 [2024-11-28 00:15:01.824907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:47.289 [2024-11-28 00:15:01.824915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 
00:20:47.289 [2024-11-28 00:15:01.824922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.289 [2024-11-28 00:15:01.826080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.289 [2024-11-28 00:15:01.826123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.289 [2024-11-28 00:15:01.826131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:20:47.289 [2024-11-28 00:15:01.826138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.289 [2024-11-28 00:15:01.827236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.289 [2024-11-28 00:15:01.827264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.289 [2024-11-28 00:15:01.827272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:20:47.289 [2024-11-28 00:15:01.827279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.289 [2024-11-28 00:15:01.827304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.289 [2024-11-28 00:15:01.827318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:20:47.289 [2024-11-28 00:15:01.827328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:20:47.289 [2024-11-28 00:15:01.827337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:20:47.289 [2024-11-28 00:15:01.827465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.289 [2024-11-28 00:15:01.827743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.827995] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.290 [2024-11-28 00:15:01.828069] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.290 [2024-11-28 00:15:01.828077] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e292497b-eef2-460b-acbc-ebae96479fae 00:20:47.290 [2024-11-28 00:15:01.828084] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:20:47.290 [2024-11-28 00:15:01.828091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 143808 00:20:47.290 [2024-11-28 00:15:01.828102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 141824 00:20:47.290 [2024-11-28 00:15:01.828117] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0140 00:20:47.290 [2024-11-28 00:15:01.828127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.290 [2024-11-28 00:15:01.828139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.290 [2024-11-28 00:15:01.828147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.290 [2024-11-28 00:15:01.828153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.290 [2024-11-28 00:15:01.828159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.290 [2024-11-28 00:15:01.828166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.290 [2024-11-28 00:15:01.828173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.290 [2024-11-28 00:15:01.828181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:20:47.290 [2024-11-28 00:15:01.828187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.290 [2024-11-28 00:15:01.829590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.290 [2024-11-28 00:15:01.829615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.290 [2024-11-28 00:15:01.829623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:20:47.290 [2024-11-28 00:15:01.829630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.290 [2024-11-28 00:15:01.829686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.290 [2024-11-28 00:15:01.829698] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.290 [2024-11-28 00:15:01.829707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:47.290 [2024-11-28 00:15:01.829714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.290 [2024-11-28 00:15:01.834573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.290 [2024-11-28 00:15:01.834599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.290 [2024-11-28 00:15:01.834608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.290 [2024-11-28 00:15:01.834616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.290 [2024-11-28 00:15:01.834659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.290 [2024-11-28 00:15:01.834666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.291 [2024-11-28 00:15:01.834674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.834681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.834731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.834740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.291 [2024-11-28 00:15:01.834748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.834755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.834769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.834777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.291 [2024-11-28 00:15:01.834783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.834794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.842874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.842911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.291 [2024-11-28 00:15:01.842921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.842928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.291 [2024-11-28 00:15:01.846472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.291 [2024-11-28 00:15:01.846549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846583] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.291 [2024-11-28 00:15:01.846598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.291 [2024-11-28 00:15:01.846681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.291 [2024-11-28 00:15:01.846737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.291 [2024-11-28 00:15:01.846793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.291 [2024-11-28 00:15:01.846862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.291 [2024-11-28 00:15:01.846873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.291 [2024-11-28 00:15:01.846880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.291 [2024-11-28 00:15:01.846989] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.787 ms, result 0 00:20:47.550 00:20:47.550 00:20:47.550 00:15:02 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.084 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:50.084 00:15:04 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.084 [2024-11-28 00:15:04.229493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:50.084 [2024-11-28 00:15:04.229609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87083 ] 00:20:50.084 [2024-11-28 00:15:04.377862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.084 [2024-11-28 00:15:04.408604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.084 [2024-11-28 00:15:04.492457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.084 [2024-11-28 00:15:04.492521] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.084 [2024-11-28 00:15:04.642358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.642411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.084 [2024-11-28 00:15:04.642424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.084 [2024-11-28 00:15:04.642435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.642479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.642489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.084 [2024-11-28 00:15:04.642497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:50.084 [2024-11-28 00:15:04.642506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.642525] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.084 [2024-11-28 00:15:04.642814] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.084 [2024-11-28 00:15:04.642843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.642853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.084 [2024-11-28 00:15:04.642861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:50.084 [2024-11-28 00:15:04.642872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.643921] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.084 [2024-11-28 00:15:04.646423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.646457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.084 [2024-11-28 00:15:04.646469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:20:50.084 [2024-11-28 00:15:04.646477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.646522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.646531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.084 [2024-11-28 00:15:04.646539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:50.084 [2024-11-28 00:15:04.646546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.651238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 
00:15:04.651269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.084 [2024-11-28 00:15:04.651278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:20:50.084 [2024-11-28 00:15:04.651291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.651355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.651377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.084 [2024-11-28 00:15:04.651384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:50.084 [2024-11-28 00:15:04.651391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.651445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.651455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.084 [2024-11-28 00:15:04.651466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:50.084 [2024-11-28 00:15:04.651473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.651499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.084 [2024-11-28 00:15:04.652799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.652824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.084 [2024-11-28 00:15:04.652832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:20:50.084 [2024-11-28 00:15:04.652839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.652867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.084 [2024-11-28 00:15:04.652875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.084 [2024-11-28 00:15:04.652884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:50.084 [2024-11-28 00:15:04.652891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.084 [2024-11-28 00:15:04.652908] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.084 [2024-11-28 00:15:04.652926] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:50.084 [2024-11-28 00:15:04.652957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.084 [2024-11-28 00:15:04.652974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:50.085 [2024-11-28 00:15:04.653048] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:50.085 [2024-11-28 00:15:04.653060] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.085 [2024-11-28 00:15:04.653072] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:50.085 [2024-11-28 00:15:04.653084] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653092] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653100] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.085 [2024-11-28 00:15:04.653107] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.085 [2024-11-28 00:15:04.653113] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:50.085 [2024-11-28 00:15:04.653119] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:50.085 [2024-11-28 00:15:04.653129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.085 [2024-11-28 00:15:04.653136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.085 [2024-11-28 00:15:04.653144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:50.085 [2024-11-28 00:15:04.653152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.085 [2024-11-28 00:15:04.653217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.085 [2024-11-28 00:15:04.653225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.085 [2024-11-28 00:15:04.653234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:50.085 [2024-11-28 00:15:04.653245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.085 [2024-11-28 00:15:04.653312] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.085 [2024-11-28 00:15:04.653321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.085 [2024-11-28 00:15:04.653332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.085 [2024-11-28 00:15:04.653358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.085 [2024-11-28 00:15:04.653392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.085 [2024-11-28 00:15:04.653406] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.085 [2024-11-28 00:15:04.653412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.085 [2024-11-28 00:15:04.653418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.085 [2024-11-28 00:15:04.653430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.085 [2024-11-28 00:15:04.653439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:50.085 [2024-11-28 00:15:04.653446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.085 [2024-11-28 00:15:04.653461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:50.085 [2024-11-28 00:15:04.653468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:50.085 [2024-11-28 00:15:04.653483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:50.085 [2024-11-28 00:15:04.653491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.085 [2024-11-28 00:15:04.653506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.085 [2024-11-28 00:15:04.653527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.085 [2024-11-28 00:15:04.653548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.085 [2024-11-28 00:15:04.653573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.085 [2024-11-28 00:15:04.653594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.085 [2024-11-28 00:15:04.653608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.085 [2024-11-28 00:15:04.653616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:50.085 [2024-11-28 00:15:04.653623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.085 [2024-11-28 00:15:04.653631] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.085 [2024-11-28 00:15:04.653642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.085 [2024-11-28 00:15:04.653650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.085 [2024-11-28 00:15:04.653665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.085 [2024-11-28 00:15:04.653673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.085 [2024-11-28 00:15:04.653682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.085 [2024-11-28 00:15:04.653690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.085 [2024-11-28 00:15:04.653697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.085 [2024-11-28 00:15:04.653704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.085 [2024-11-28 00:15:04.653712] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.085 [2024-11-28 00:15:04.653722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.085 [2024-11-28 00:15:04.653731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.085 [2024-11-28 00:15:04.653740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:50.085 [2024-11-28 00:15:04.653748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:50.085 [2024-11-28 00:15:04.653755] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:50.085 [2024-11-28 00:15:04.653763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:50.085 [2024-11-28 00:15:04.653771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:50.085 [2024-11-28 00:15:04.653778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:50.085 [2024-11-28 00:15:04.653786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:50.085 [2024-11-28 00:15:04.653794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:50.085 [2024-11-28 00:15:04.653802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:50.085 [2024-11-28 00:15:04.653812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:50.085 [2024-11-28 00:15:04.653820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:50.085 [2024-11-28 00:15:04.653828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:50.085 [2024-11-28 00:15:04.653835] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.085 [2024-11-28 00:15:04.653844] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.085 [2024-11-28 00:15:04.653852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.085 [2024-11-28 00:15:04.653861] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.085 [2024-11-28 00:15:04.653869] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.085 [2024-11-28 00:15:04.653877] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:50.085 [2024-11-28 00:15:04.653885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.085 [2024-11-28 00:15:04.653893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.085 [2024-11-28 00:15:04.653901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:20:50.085 [2024-11-28 00:15:04.653911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.085 [2024-11-28 00:15:04.659793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.659821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.086 [2024-11-28 00:15:04.659831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.846 ms 00:20:50.086 [2024-11-28 00:15:04.659837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.086 [2024-11-28 00:15:04.659917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.659925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.086 [2024-11-28 00:15:04.659932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:50.086 [2024-11-28 00:15:04.659940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.086 [2024-11-28 00:15:04.679382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.679423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.086 [2024-11-28 00:15:04.679439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.409 ms 00:20:50.086 [2024-11-28 00:15:04.679447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.086 [2024-11-28 00:15:04.679489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.679499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.086 [2024-11-28 00:15:04.679514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.086 [2024-11-28 00:15:04.679524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.086 [2024-11-28 00:15:04.679874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.679904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.086 [2024-11-28 00:15:04.679914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:50.086 [2024-11-28 00:15:04.679928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.086 [2024-11-28 00:15:04.680052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.086 [2024-11-28 00:15:04.680066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.086 [2024-11-28 00:15:04.680080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:50.086 [2024-11-28 00:15:04.680089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.685553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.685590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.346 [2024-11-28 00:15:04.685599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.439 ms 00:20:50.346 [2024-11-28 
00:15:04.685607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.688258] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:50.346 [2024-11-28 00:15:04.688295] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.346 [2024-11-28 00:15:04.688306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.688314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.346 [2024-11-28 00:15:04.688322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:20:50.346 [2024-11-28 00:15:04.688330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.703314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.703354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.346 [2024-11-28 00:15:04.703370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.934 ms 00:20:50.346 [2024-11-28 00:15:04.703378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.705356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.705399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.346 [2024-11-28 00:15:04.705407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:20:50.346 [2024-11-28 00:15:04.705414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.707357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.707406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.346 [2024-11-28 00:15:04.707416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:20:50.346 [2024-11-28 00:15:04.707424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.707632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.346 [2024-11-28 00:15:04.707644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.346 [2024-11-28 00:15:04.707652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:50.346 [2024-11-28 00:15:04.707662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.346 [2024-11-28 00:15:04.725836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.725871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.347 [2024-11-28 00:15:04.725882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.156 ms 00:20:50.347 [2024-11-28 00:15:04.725889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.733094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.347 [2024-11-28 00:15:04.735232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.735260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.347 [2024-11-28 00:15:04.735271] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.308 ms 00:20:50.347 [2024-11-28 00:15:04.735278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.735331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.735341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.347 [2024-11-28 00:15:04.735357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.347 [2024-11-28 00:15:04.735377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.735918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.735945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.347 [2024-11-28 00:15:04.735954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:20:50.347 [2024-11-28 00:15:04.735962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.737247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.737274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:50.347 [2024-11-28 00:15:04.737282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:20:50.347 [2024-11-28 00:15:04.737294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.737330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.737339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.347 [2024-11-28 00:15:04.737350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.347 [2024-11-28 00:15:04.737356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.737405] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.347 [2024-11-28 00:15:04.737416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.737425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.347 [2024-11-28 00:15:04.737433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.347 [2024-11-28 00:15:04.737443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.741130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.741160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.347 [2024-11-28 00:15:04.741169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:20:50.347 [2024-11-28 00:15:04.741197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.741256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.347 [2024-11-28 00:15:04.741265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.347 [2024-11-28 00:15:04.741277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:50.347 [2024-11-28 00:15:04.741284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.347 [2024-11-28 00:15:04.742135] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 99.388 ms, result 0 00:20:51.724  [2024-11-28T00:15:07.276Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-28T00:15:08.210Z] Copying: 30/1024 [MB] (14 MBps) [2024-11-28T00:15:09.145Z] Copying: 57/1024 [MB] (26 MBps) [2024-11-28T00:15:10.081Z] Copying: 81/1024 [MB] (24 MBps) [2024-11-28T00:15:11.013Z] Copying: 103/1024 [MB] (21 MBps) [2024-11-28T00:15:11.946Z] Copying: 119/1024 [MB] (16 MBps) [2024-11-28T00:15:13.320Z] Copying: 134/1024 [MB] (14 MBps) [2024-11-28T00:15:14.256Z] Copying: 148/1024 [MB] (14 MBps) [2024-11-28T00:15:15.190Z] Copying: 163/1024 [MB] (15 MBps) [2024-11-28T00:15:16.121Z] Copying: 180/1024 [MB] (17 MBps) [2024-11-28T00:15:17.054Z] Copying: 195/1024 [MB] (14 MBps) [2024-11-28T00:15:17.988Z] Copying: 217/1024 [MB] (21 MBps) [2024-11-28T00:15:18.922Z] Copying: 231/1024 [MB] (14 MBps) [2024-11-28T00:15:20.322Z] Copying: 246/1024 [MB] (14 MBps) [2024-11-28T00:15:21.253Z] Copying: 260/1024 [MB] (13 MBps) [2024-11-28T00:15:22.207Z] Copying: 280/1024 [MB] (20 MBps) [2024-11-28T00:15:23.182Z] Copying: 294/1024 [MB] (13 MBps) [2024-11-28T00:15:24.118Z] Copying: 308/1024 [MB] (14 MBps) [2024-11-28T00:15:25.053Z] Copying: 325/1024 [MB] (16 MBps) [2024-11-28T00:15:25.989Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-28T00:15:26.925Z] Copying: 366/1024 [MB] (23 MBps) [2024-11-28T00:15:28.301Z] Copying: 390/1024 [MB] (24 MBps) [2024-11-28T00:15:29.236Z] Copying: 417/1024 [MB] (26 MBps) [2024-11-28T00:15:30.171Z] Copying: 439/1024 [MB] (22 MBps) [2024-11-28T00:15:31.105Z] Copying: 467/1024 [MB] (27 MBps) [2024-11-28T00:15:32.040Z] Copying: 494/1024 [MB] (27 MBps) [2024-11-28T00:15:32.974Z] Copying: 510/1024 [MB] (16 MBps) [2024-11-28T00:15:34.348Z] Copying: 531/1024 [MB] (21 MBps) [2024-11-28T00:15:34.914Z] Copying: 553/1024 [MB] (21 MBps) [2024-11-28T00:15:36.287Z] Copying: 581/1024 [MB] (27 MBps) [2024-11-28T00:15:37.221Z] Copying: 610/1024 [MB] (29 MBps) [2024-11-28T00:15:38.173Z] Copying: 636/1024 [MB] (25 MBps) [2024-11-28T00:15:39.107Z] Copying: 660/1024 [MB] (23 MBps) [2024-11-28T00:15:40.068Z] Copying: 687/1024 [MB] (26 MBps) [2024-11-28T00:15:41.001Z] Copying: 704/1024 [MB] (17 MBps) [2024-11-28T00:15:41.934Z] Copying: 719/1024 [MB] (14 MBps) [2024-11-28T00:15:43.308Z] Copying: 734/1024 [MB] (14 MBps) [2024-11-28T00:15:44.242Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-28T00:15:45.174Z] Copying: 765/1024 [MB] (16 MBps) [2024-11-28T00:15:46.106Z] Copying: 780/1024 [MB] (14 MBps) [2024-11-28T00:15:47.041Z] Copying: 794/1024 [MB] (14 MBps) [2024-11-28T00:15:47.974Z] Copying: 808/1024 [MB] (13 MBps) [2024-11-28T00:15:49.348Z] Copying: 828/1024 [MB] (20 MBps) [2024-11-28T00:15:49.914Z] Copying: 843/1024 [MB] (14 MBps) [2024-11-28T00:15:51.288Z] Copying: 857/1024 [MB] (14 MBps) [2024-11-28T00:15:52.223Z] Copying: 870/1024 [MB] (12 MBps) [2024-11-28T00:15:53.158Z] Copying: 882/1024 [MB] (11 MBps) [2024-11-28T00:15:54.090Z] Copying: 895/1024 [MB] (12 MBps) [2024-11-28T00:15:55.025Z] Copying: 913/1024 [MB] (17 MBps) [2024-11-28T00:15:56.058Z] Copying: 926/1024 [MB] (13 MBps) [2024-11-28T00:15:56.993Z] Copying: 939/1024 [MB] (13 MBps) [2024-11-28T00:15:57.929Z] Copying: 954/1024 [MB] (14 MBps) [2024-11-28T00:15:59.305Z] Copying: 967/1024 [MB] (13 MBps) [2024-11-28T00:16:00.241Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-28T00:16:01.183Z] Copying: 995/1024 [MB] (14 MBps) [2024-11-28T00:16:02.117Z] Copying: 1008/1024 [MB] (13 MBps) [2024-11-28T00:16:02.117Z] Copying: 1022/1024 [MB] (13 MBps) [2024-11-28T00:16:02.378Z] Copying: 
1024/1024 [MB] (average 17 MBps)[2024-11-28 00:16:02.270052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.270113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:47.776 [2024-11-28 00:16:02.270126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:47.776 [2024-11-28 00:16:02.270134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.270155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:47.776 [2024-11-28 00:16:02.270646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.270664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:47.776 [2024-11-28 00:16:02.270673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:21:47.776 [2024-11-28 00:16:02.270680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.270906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.270916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:47.776 [2024-11-28 00:16:02.272472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:47.776 [2024-11-28 00:16:02.272487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.276605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.276635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:47.776 [2024-11-28 00:16:02.276647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:21:47.776 [2024-11-28 00:16:02.276655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.282775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.282801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:47.776 [2024-11-28 00:16:02.282816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.098 ms 00:21:47.776 [2024-11-28 00:16:02.282823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.285308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.285339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:47.776 [2024-11-28 00:16:02.285349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:21:47.776 [2024-11-28 00:16:02.285355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.288848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.288886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:47.776 [2024-11-28 00:16:02.288895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:21:47.776 [2024-11-28 00:16:02.288902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.297304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.297336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 
00:21:47.776 [2024-11-28 00:16:02.297345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.369 ms 00:21:47.776 [2024-11-28 00:16:02.297352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.299870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.299996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:47.776 [2024-11-28 00:16:02.300011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:21:47.776 [2024-11-28 00:16:02.300017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.302124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.302154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:47.776 [2024-11-28 00:16:02.302163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:21:47.776 [2024-11-28 00:16:02.302169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.303772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.303802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:47.776 [2024-11-28 00:16:02.303811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:21:47.776 [2024-11-28 00:16:02.303817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.305173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.776 [2024-11-28 00:16:02.305214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:47.776 [2024-11-28 00:16:02.305222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:21:47.776 [2024-11-28 00:16:02.305229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.776 [2024-11-28 00:16:02.305256] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:47.776 [2024-11-28 00:16:02.305269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:47.776 [2024-11-28 00:16:02.305286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:21:47.776 [2024-11-28 00:16:02.305294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 
261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305724] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 
00:16:02.305907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:47.776 [2024-11-28 00:16:02.305921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.305992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.306000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.306007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.306014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:47.777 [2024-11-28 00:16:02.306029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:47.777 [2024-11-28 00:16:02.306036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e292497b-eef2-460b-acbc-ebae96479fae 00:21:47.777 [2024-11-28 00:16:02.306044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:21:47.777 [2024-11-28 00:16:02.306051] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:47.777 [2024-11-28 00:16:02.306063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:47.777 [2024-11-28 00:16:02.306070] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:47.777 [2024-11-28 00:16:02.306077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:47.777 [2024-11-28 00:16:02.306084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:47.777 [2024-11-28 00:16:02.306092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:47.777 [2024-11-28 00:16:02.306098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:47.777 [2024-11-28 00:16:02.306105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:47.777 
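(A note on the "WAF: inf" in the statistics dump above: write amplification factor is conventionally the ratio of total media writes to user writes, WAF = total_writes / user_writes; with total writes = 960 and user writes = 0 the ratio is undefined, which ftl_debug.c reports as "inf". The exact counter semantics are inferred from the field names in the dump, not taken from the SPDK sources.)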
[2024-11-28 00:16:02.306111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.777 [2024-11-28 00:16:02.306118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:47.777 [2024-11-28 00:16:02.306127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:21:47.777 [2024-11-28 00:16:02.306136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.307542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.777 [2024-11-28 00:16:02.307562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:47.777 [2024-11-28 00:16:02.307570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:21:47.777 [2024-11-28 00:16:02.307577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.307635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.777 [2024-11-28 00:16:02.307646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:47.777 [2024-11-28 00:16:02.307654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:47.777 [2024-11-28 00:16:02.307661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.312629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.312651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.777 [2024-11-28 00:16:02.312659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.312667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.312712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.312725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.777 [2024-11-28 00:16:02.312732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.312739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.312787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.312796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.777 [2024-11-28 00:16:02.312809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.312816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.312830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.312837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.777 [2024-11-28 00:16:02.312850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.312857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.320989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.321024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.777 [2024-11-28 00:16:02.321033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.321041] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:47.777 [2024-11-28 00:16:02.324668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.777 [2024-11-28 00:16:02.324739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.777 [2024-11-28 00:16:02.324784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.777 [2024-11-28 00:16:02.324873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:47.777 [2024-11-28 00:16:02.324925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.324965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.324973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.777 [2024-11-28 00:16:02.324980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.324988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.325024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.777 [2024-11-28 00:16:02.325033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.777 [2024-11-28 00:16:02.325040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.777 [2024-11-28 00:16:02.325050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.777 [2024-11-28 00:16:02.325160] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.088 ms, result 0 00:21:48.037 00:21:48.037 00:21:48.037 00:16:02 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
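The md5sum -c step above is the integrity check at the core of the dirty-shutdown test: a reference checksum of the test file is taken while the data is known-good, the FTL device is torn down without a clean shutdown, and after recovery the file is read back and re-verified. A minimal standalone sketch of that pattern (hypothetical paths; not the exact dirty_shutdown.sh logic):

  #!/usr/bin/env bash
  set -euo pipefail
  testfile=/tmp/ftl_testfile          # hypothetical path, for illustration only
  md5file=/tmp/ftl_testfile.md5
  dd if=/dev/urandom of="$testfile" bs=1M count=4 status=none
  # Record the reference checksum while the data is known-good.
  md5sum "$testfile" > "$md5file"
  # ... in the real test the data sits on the FTL bdev, the target is killed
  # without a graceful shutdown, and the device is recovered ...
  # Verify the recovered data still matches the recorded checksum.
  md5sum -c "$md5file"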
00:21:50.570 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@37 -- # killprocess 85699 00:21:50.570 00:16:04 -- common/autotest_common.sh@936 -- # '[' -z 85699 ']' 00:21:50.570 00:16:04 -- common/autotest_common.sh@940 -- # kill -0 85699 00:21:50.570 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85699) - No such process 00:21:50.570 Process with pid 85699 is not found 00:21:50.570 00:16:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85699 is not found' 00:21:50.570 00:16:04 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:21:50.570 Remove shared memory files 00:21:50.570 00:16:05 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:21:50.570 00:16:05 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:50.570 00:16:05 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:50.570 00:16:05 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:50.570 00:16:05 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:50.570 00:16:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:50.570 00:16:05 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:50.570 ************************************ 00:21:50.570 END TEST ftl_dirty_shutdown 00:21:50.570 ************************************ 00:21:50.570 00:21:50.570 real 3m2.408s 00:21:50.570 user 3m18.053s 00:21:50.570 sys 0m22.094s 00:21:50.570 00:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:50.570 00:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 00:16:05 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:21:50.829 00:16:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:50.829 00:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:50.829 00:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 ************************************ 00:21:50.829 START TEST ftl_upgrade_shutdown 00:21:50.829 ************************************ 00:21:50.829 00:16:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:21:50.829 * Looking for test storage... 
00:21:50.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.829 00:16:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:50.829 00:16:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:50.829 00:16:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:50.829 00:16:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:50.829 00:16:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:50.829 00:16:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:50.829 00:16:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:50.829 00:16:05 -- scripts/common.sh@335 -- # IFS=.-: 00:21:50.829 00:16:05 -- scripts/common.sh@335 -- # read -ra ver1 00:21:50.829 00:16:05 -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.829 00:16:05 -- scripts/common.sh@336 -- # read -ra ver2 00:21:50.829 00:16:05 -- scripts/common.sh@337 -- # local 'op=<' 00:21:50.829 00:16:05 -- scripts/common.sh@339 -- # ver1_l=2 00:21:50.829 00:16:05 -- scripts/common.sh@340 -- # ver2_l=1 00:21:50.829 00:16:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:50.829 00:16:05 -- scripts/common.sh@343 -- # case "$op" in 00:21:50.829 00:16:05 -- scripts/common.sh@344 -- # : 1 00:21:50.829 00:16:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:50.829 00:16:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:50.829 00:16:05 -- scripts/common.sh@364 -- # decimal 1 00:21:50.829 00:16:05 -- scripts/common.sh@352 -- # local d=1 00:21:50.829 00:16:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.829 00:16:05 -- scripts/common.sh@354 -- # echo 1 00:21:50.829 00:16:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:50.829 00:16:05 -- scripts/common.sh@365 -- # decimal 2 00:21:50.829 00:16:05 -- scripts/common.sh@352 -- # local d=2 00:21:50.829 00:16:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.829 00:16:05 -- scripts/common.sh@354 -- # echo 2 00:21:50.829 00:16:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:50.829 00:16:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:50.829 00:16:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:50.829 00:16:05 -- scripts/common.sh@367 -- # return 0 00:21:50.829 00:16:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.829 00:16:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.829 --rc genhtml_branch_coverage=1 00:21:50.829 --rc genhtml_function_coverage=1 00:21:50.829 --rc genhtml_legend=1 00:21:50.829 --rc geninfo_all_blocks=1 00:21:50.829 --rc geninfo_unexecuted_blocks=1 00:21:50.829 00:21:50.829 ' 00:21:50.829 00:16:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.829 --rc genhtml_branch_coverage=1 00:21:50.829 --rc genhtml_function_coverage=1 00:21:50.829 --rc genhtml_legend=1 00:21:50.829 --rc geninfo_all_blocks=1 00:21:50.829 --rc geninfo_unexecuted_blocks=1 00:21:50.829 00:21:50.829 ' 00:21:50.829 00:16:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.829 --rc genhtml_branch_coverage=1 00:21:50.829 --rc genhtml_function_coverage=1 00:21:50.829 --rc genhtml_legend=1 00:21:50.829 --rc geninfo_all_blocks=1 00:21:50.829 --rc geninfo_unexecuted_blocks=1 00:21:50.829 00:21:50.829 ' 00:21:50.829 00:16:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.829 --rc genhtml_branch_coverage=1 00:21:50.829 --rc genhtml_function_coverage=1 00:21:50.829 --rc genhtml_legend=1 00:21:50.829 --rc geninfo_all_blocks=1 00:21:50.829 --rc geninfo_unexecuted_blocks=1 00:21:50.829 00:21:50.829 ' 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:50.829 00:16:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:21:50.829 00:16:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.829 00:16:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:50.829 00:16:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:50.829 00:16:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:50.829 00:16:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:50.829 00:16:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:50.829 00:16:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:50.829 00:16:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.829 00:16:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.829 00:16:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:50.829 00:16:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:50.829 00:16:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:50.829 00:16:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:50.829 00:16:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:50.829 00:16:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:50.829 00:16:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.829 00:16:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.829 00:16:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:50.829 00:16:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:50.829 00:16:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:50.829 00:16:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:50.829 00:16:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:50.829 00:16:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:50.829 00:16:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:50.829 00:16:05 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:50.829 00:16:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:50.829 00:16:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:21:50.829 00:16:05 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:21:50.829 00:16:05 -- ftl/common.sh@81 -- # local base_bdev= 00:21:50.829 00:16:05 -- ftl/common.sh@82 -- # local cache_bdev= 00:21:50.829 00:16:05 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:50.829 00:16:05 -- ftl/common.sh@89 -- # spdk_tgt_pid=87788 00:21:50.829 00:16:05 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:50.829 00:16:05 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:21:50.829 00:16:05 -- ftl/common.sh@91 -- # waitforlisten 87788 00:21:50.829 00:16:05 -- common/autotest_common.sh@829 -- # '[' -z 87788 ']' 00:21:50.829 00:16:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.829 00:16:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.829 00:16:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.829 00:16:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.829 00:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:51.087 [2024-11-28 00:16:05.440010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:51.087 [2024-11-28 00:16:05.440220] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87788 ] 00:21:51.088 [2024-11-28 00:16:05.586406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.088 [2024-11-28 00:16:05.617236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:51.088 [2024-11-28 00:16:05.617595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.655 00:16:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.655 00:16:06 -- common/autotest_common.sh@862 -- # return 0 00:21:51.655 00:16:06 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:51.655 00:16:06 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:21:51.655 00:16:06 -- ftl/common.sh@99 -- # local params 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:21:51.655 00:16:06 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:51.655 00:16:06 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:21:51.655 00:16:06 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:21:51.655 00:16:06 -- ftl/common.sh@54 -- # local name=base 00:21:51.655 00:16:06 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:51.655 00:16:06 -- ftl/common.sh@56 -- # local size=20480 00:21:51.655 00:16:06 -- ftl/common.sh@59 -- # local base_bdev 00:21:51.655 00:16:06 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:21:52.222 00:16:06 -- ftl/common.sh@60 -- # base_bdev=basen1 00:21:52.222 00:16:06 -- ftl/common.sh@62 -- # local base_size 00:21:52.222 00:16:06 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:21:52.222 00:16:06 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:21:52.222 00:16:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:52.222 00:16:06 -- common/autotest_common.sh@1369 -- # local bs 00:21:52.222 00:16:06 -- common/autotest_common.sh@1370 -- # local nb 00:21:52.222 00:16:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:21:52.222 00:16:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:52.222 { 00:21:52.222 "name": "basen1", 00:21:52.222 "aliases": [ 00:21:52.222 "d6fd3340-f4df-485b-9add-689743d2391a" 00:21:52.222 ], 00:21:52.222 "product_name": "NVMe disk", 00:21:52.222 "block_size": 4096, 00:21:52.222 "num_blocks": 1310720, 00:21:52.222 "uuid": "d6fd3340-f4df-485b-9add-689743d2391a", 00:21:52.222 "assigned_rate_limits": { 00:21:52.222 "rw_ios_per_sec": 0, 00:21:52.222 
"rw_mbytes_per_sec": 0, 00:21:52.222 "r_mbytes_per_sec": 0, 00:21:52.222 "w_mbytes_per_sec": 0 00:21:52.222 }, 00:21:52.222 "claimed": true, 00:21:52.222 "claim_type": "read_many_write_one", 00:21:52.222 "zoned": false, 00:21:52.222 "supported_io_types": { 00:21:52.222 "read": true, 00:21:52.222 "write": true, 00:21:52.222 "unmap": true, 00:21:52.222 "write_zeroes": true, 00:21:52.222 "flush": true, 00:21:52.222 "reset": true, 00:21:52.222 "compare": true, 00:21:52.222 "compare_and_write": false, 00:21:52.222 "abort": true, 00:21:52.222 "nvme_admin": true, 00:21:52.222 "nvme_io": true 00:21:52.222 }, 00:21:52.222 "driver_specific": { 00:21:52.222 "nvme": [ 00:21:52.222 { 00:21:52.222 "pci_address": "0000:00:07.0", 00:21:52.223 "trid": { 00:21:52.223 "trtype": "PCIe", 00:21:52.223 "traddr": "0000:00:07.0" 00:21:52.223 }, 00:21:52.223 "ctrlr_data": { 00:21:52.223 "cntlid": 0, 00:21:52.223 "vendor_id": "0x1b36", 00:21:52.223 "model_number": "QEMU NVMe Ctrl", 00:21:52.223 "serial_number": "12341", 00:21:52.223 "firmware_revision": "8.0.0", 00:21:52.223 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:52.223 "oacs": { 00:21:52.223 "security": 0, 00:21:52.223 "format": 1, 00:21:52.223 "firmware": 0, 00:21:52.223 "ns_manage": 1 00:21:52.223 }, 00:21:52.223 "multi_ctrlr": false, 00:21:52.223 "ana_reporting": false 00:21:52.223 }, 00:21:52.223 "vs": { 00:21:52.223 "nvme_version": "1.4" 00:21:52.223 }, 00:21:52.223 "ns_data": { 00:21:52.223 "id": 1, 00:21:52.223 "can_share": false 00:21:52.223 } 00:21:52.223 } 00:21:52.223 ], 00:21:52.223 "mp_policy": "active_passive" 00:21:52.223 } 00:21:52.223 } 00:21:52.223 ]' 00:21:52.223 00:16:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:52.223 00:16:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:52.223 00:16:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:52.223 00:16:06 -- common/autotest_common.sh@1373 -- # nb=1310720 00:21:52.223 00:16:06 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:21:52.223 00:16:06 -- common/autotest_common.sh@1377 -- # echo 5120 00:21:52.223 00:16:06 -- ftl/common.sh@63 -- # base_size=5120 00:21:52.223 00:16:06 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:21:52.223 00:16:06 -- ftl/common.sh@67 -- # clear_lvols 00:21:52.223 00:16:06 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.223 00:16:06 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:52.482 00:16:06 -- ftl/common.sh@28 -- # stores=b009675b-d06f-47bb-be17-e74d747cd05c 00:21:52.482 00:16:06 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:52.482 00:16:06 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b009675b-d06f-47bb-be17-e74d747cd05c 00:21:52.740 00:16:07 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:21:52.998 00:16:07 -- ftl/common.sh@68 -- # lvs=19849a54-ba3e-4c6a-8b42-0e6c510257ae 00:21:52.998 00:16:07 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 19849a54-ba3e-4c6a-8b42-0e6c510257ae 00:21:52.998 00:16:07 -- ftl/common.sh@107 -- # base_bdev=1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 00:21:52.998 00:16:07 -- ftl/common.sh@108 -- # [[ -z 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 ]] 00:21:52.998 00:16:07 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 5120 00:21:52.998 00:16:07 -- ftl/common.sh@35 -- # local name=cache 00:21:52.998 00:16:07 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:52.998 00:16:07 -- ftl/common.sh@37 -- # local base_bdev=1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 00:21:52.998 00:16:07 -- ftl/common.sh@38 -- # local cache_size=5120 00:21:52.998 00:16:07 -- ftl/common.sh@41 -- # get_bdev_size 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 00:21:52.998 00:16:07 -- common/autotest_common.sh@1367 -- # local bdev_name=1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 00:21:52.998 00:16:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:52.998 00:16:07 -- common/autotest_common.sh@1369 -- # local bs 00:21:52.998 00:16:07 -- common/autotest_common.sh@1370 -- # local nb 00:21:52.998 00:16:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 00:21:53.257 00:16:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:53.257 { 00:21:53.257 "name": "1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3", 00:21:53.257 "aliases": [ 00:21:53.257 "lvs/basen1p0" 00:21:53.257 ], 00:21:53.257 "product_name": "Logical Volume", 00:21:53.257 "block_size": 4096, 00:21:53.257 "num_blocks": 5242880, 00:21:53.257 "uuid": "1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3", 00:21:53.257 "assigned_rate_limits": { 00:21:53.257 "rw_ios_per_sec": 0, 00:21:53.257 "rw_mbytes_per_sec": 0, 00:21:53.257 "r_mbytes_per_sec": 0, 00:21:53.257 "w_mbytes_per_sec": 0 00:21:53.257 }, 00:21:53.257 "claimed": false, 00:21:53.257 "zoned": false, 00:21:53.257 "supported_io_types": { 00:21:53.257 "read": true, 00:21:53.257 "write": true, 00:21:53.257 "unmap": true, 00:21:53.257 "write_zeroes": true, 00:21:53.257 "flush": false, 00:21:53.257 "reset": true, 00:21:53.257 "compare": false, 00:21:53.257 "compare_and_write": false, 00:21:53.257 "abort": false, 00:21:53.257 "nvme_admin": false, 00:21:53.257 "nvme_io": false 00:21:53.257 }, 00:21:53.257 "driver_specific": { 00:21:53.257 "lvol": { 00:21:53.257 "lvol_store_uuid": "19849a54-ba3e-4c6a-8b42-0e6c510257ae", 00:21:53.257 "base_bdev": "basen1", 00:21:53.257 "thin_provision": true, 00:21:53.257 "snapshot": false, 00:21:53.257 "clone": false, 00:21:53.257 "esnap_clone": false 00:21:53.257 } 00:21:53.257 } 00:21:53.257 } 00:21:53.257 ]' 00:21:53.257 00:16:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:53.257 00:16:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:53.257 00:16:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:53.257 00:16:07 -- common/autotest_common.sh@1373 -- # nb=5242880 00:21:53.257 00:16:07 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:21:53.257 00:16:07 -- common/autotest_common.sh@1377 -- # echo 20480 00:21:53.257 00:16:07 -- ftl/common.sh@41 -- # local base_size=1024 00:21:53.257 00:16:07 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:53.257 00:16:07 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:21:53.516 00:16:08 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:21:53.516 00:16:08 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:21:53.516 00:16:08 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:21:53.775 00:16:08 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:21:53.775 00:16:08 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:21:53.775 00:16:08 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 -c cachen1p0 --l2p_dram_limit 2 00:21:54.035 
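Pulling the rpc.py calls out of the xtrace above, the FTL bdev under test is assembled in roughly the following order (a condensed sketch of what the ftl/common.sh helpers do in this run, with the size checks, stale-lvstore cleanup, and JSON parsing omitted; UUIDs are the ones printed in the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: attach the NVMe controller at 0000:00:07.0 and carve a
  # 20480 MiB thin-provisioned logical volume out of it.
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
  $RPC bdev_lvol_create_lvstore basen1 lvs
  $RPC bdev_lvol_create basen1p0 20480 -t -u 19849a54-ba3e-4c6a-8b42-0e6c510257ae
  # Cache device: attach 0000:00:06.0 and split off a 5120 MiB partition
  # to serve as the non-volatile write buffer cache.
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
  $RPC bdev_split_create cachen1 -s 5120 1
  # Bind base and cache together as the FTL bdev used by the test.
  $RPC -t 60 bdev_ftl_create -b ftl -d 1a41df59-f92f-4b7a-b7a7-df08c2f7c9a3 -c cachen1p0 --l2p_dram_limit 2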
[2024-11-28 00:16:08.397029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.397225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:21:54.035 [2024-11-28 00:16:08.397292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:21:54.035 [2024-11-28 00:16:08.397313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.397394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.397416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:54.035 [2024-11-28 00:16:08.397435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:21:54.035 [2024-11-28 00:16:08.397454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.397483] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:21:54.035 [2024-11-28 00:16:08.397762] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:21:54.035 [2024-11-28 00:16:08.397971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.397988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:54.035 [2024-11-28 00:16:08.398006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.493 ms 00:21:54.035 [2024-11-28 00:16:08.398020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.398084] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e11b55f6-8bf5-429b-9c26-2ec0793e0512 00:21:54.035 [2024-11-28 00:16:08.399029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.399129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:21:54.035 [2024-11-28 00:16:08.399178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:21:54.035 [2024-11-28 00:16:08.399190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.403836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.403868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:54.035 [2024-11-28 00:16:08.403875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.611 ms 00:21:54.035 [2024-11-28 00:16:08.403884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.403912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.403920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:54.035 [2024-11-28 00:16:08.403926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:21:54.035 [2024-11-28 00:16:08.403933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.403968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.403977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:21:54.035 [2024-11-28 00:16:08.403985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:54.035 [2024-11-28 00:16:08.403992] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.404010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:21:54.035 [2024-11-28 00:16:08.405284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.405308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:54.035 [2024-11-28 00:16:08.405316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.279 ms 00:21:54.035 [2024-11-28 00:16:08.405323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.405345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.405351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:21:54.035 [2024-11-28 00:16:08.405371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:54.035 [2024-11-28 00:16:08.405377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.405391] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:21:54.035 [2024-11-28 00:16:08.405482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:21:54.035 [2024-11-28 00:16:08.405494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:21:54.035 [2024-11-28 00:16:08.405503] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:21:54.035 [2024-11-28 00:16:08.405512] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:21:54.035 [2024-11-28 00:16:08.405519] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:21:54.035 [2024-11-28 00:16:08.405526] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:21:54.035 [2024-11-28 00:16:08.405534] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:21:54.035 [2024-11-28 00:16:08.405544] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:21:54.035 [2024-11-28 00:16:08.405549] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:21:54.035 [2024-11-28 00:16:08.405556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.405562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:21:54.035 [2024-11-28 00:16:08.405569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:21:54.035 [2024-11-28 00:16:08.405574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.405625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.035 [2024-11-28 00:16:08.405631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:21:54.035 [2024-11-28 00:16:08.405639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:21:54.035 [2024-11-28 00:16:08.405644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.035 [2024-11-28 00:16:08.405699] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:21:54.035 [2024-11-28 00:16:08.405705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:21:54.035 [2024-11-28 
00:16:08.405717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:54.035 [2024-11-28 00:16:08.405722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:21:54.035 [2024-11-28 00:16:08.405735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:21:54.035 [2024-11-28 00:16:08.405746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:21:54.035 [2024-11-28 00:16:08.405752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:21:54.035 [2024-11-28 00:16:08.405757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:21:54.035 [2024-11-28 00:16:08.405769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:21:54.035 [2024-11-28 00:16:08.405776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:21:54.035 [2024-11-28 00:16:08.405788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:21:54.035 [2024-11-28 00:16:08.405804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:21:54.035 [2024-11-28 00:16:08.405810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.035 [2024-11-28 00:16:08.405815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:21:54.035 [2024-11-28 00:16:08.405821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:21:54.035 [2024-11-28 00:16:08.405827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:21:54.035 [2024-11-28 00:16:08.405834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:21:54.035 [2024-11-28 00:16:08.405839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:21:54.035 [2024-11-28 00:16:08.405845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:54.036 [2024-11-28 00:16:08.405850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:21:54.036 [2024-11-28 00:16:08.405856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:21:54.036 [2024-11-28 00:16:08.405860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:54.036 [2024-11-28 00:16:08.405869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:21:54.036 [2024-11-28 00:16:08.405874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:21:54.036 [2024-11-28 00:16:08.405880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:54.036 [2024-11-28 00:16:08.405885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:21:54.036 [2024-11-28 00:16:08.405890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:21:54.036 [2024-11-28 00:16:08.405895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:54.036 [2024-11-28 
00:16:08.405901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:21:54.036 [2024-11-28 00:16:08.405906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:21:54.036 [2024-11-28 00:16:08.405912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.036 [2024-11-28 00:16:08.405917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:21:54.036 [2024-11-28 00:16:08.405923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:21:54.036 [2024-11-28 00:16:08.405928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.036 [2024-11-28 00:16:08.405935] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:21:54.036 [2024-11-28 00:16:08.405942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:21:54.036 [2024-11-28 00:16:08.405950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:54.036 [2024-11-28 00:16:08.405956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:54.036 [2024-11-28 00:16:08.405968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:21:54.036 [2024-11-28 00:16:08.405974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:21:54.036 [2024-11-28 00:16:08.405981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:21:54.036 [2024-11-28 00:16:08.405987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:21:54.036 [2024-11-28 00:16:08.405993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:21:54.036 [2024-11-28 00:16:08.405999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:21:54.036 [2024-11-28 00:16:08.406009] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:21:54.036 [2024-11-28 00:16:08.406017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:21:54.036 [2024-11-28 00:16:08.406033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:21:54.036 [2024-11-28 00:16:08.406054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:21:54.036 [2024-11-28 00:16:08.406060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:21:54.036 [2024-11-28 00:16:08.406067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:21:54.036 [2024-11-28 00:16:08.406073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406082] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:21:54.036 [2024-11-28 00:16:08.406109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:21:54.036 [2024-11-28 00:16:08.406115] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:21:54.036 [2024-11-28 00:16:08.406123] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406129] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.036 [2024-11-28 00:16:08.406142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:21:54.036 [2024-11-28 00:16:08.406148] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:21:54.036 [2024-11-28 00:16:08.406155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:21:54.036 [2024-11-28 00:16:08.406161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.406169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:21:54.036 [2024-11-28 00:16:08.406175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 00:21:54.036 [2024-11-28 00:16:08.406182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.411184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.411215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:54.036 [2024-11-28 00:16:08.411222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.969 ms 00:21:54.036 [2024-11-28 00:16:08.411229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.411259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.411266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:21:54.036 [2024-11-28 00:16:08.411272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:21:54.036 [2024-11-28 00:16:08.411279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.418830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.418860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:54.036 [2024-11-28 00:16:08.418871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.521 ms 00:21:54.036 [2024-11-28 
00:16:08.418878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.418898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.418905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:54.036 [2024-11-28 00:16:08.418911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:54.036 [2024-11-28 00:16:08.418918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.419208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.419229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:54.036 [2024-11-28 00:16:08.419237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:21:54.036 [2024-11-28 00:16:08.419243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.419277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.419285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:54.036 [2024-11-28 00:16:08.419291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:21:54.036 [2024-11-28 00:16:08.419298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.423964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.424073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:54.036 [2024-11-28 00:16:08.424084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.653 ms 00:21:54.036 [2024-11-28 00:16:08.424092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.430740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:21:54.036 [2024-11-28 00:16:08.431519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.431596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:21:54.036 [2024-11-28 00:16:08.431638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.374 ms 00:21:54.036 [2024-11-28 00:16:08.431657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.446036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:54.036 [2024-11-28 00:16:08.446139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:21:54.036 [2024-11-28 00:16:08.446185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.346 ms 00:21:54.036 [2024-11-28 00:16:08.446203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:54.036 [2024-11-28 00:16:08.446232] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
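A quick sanity check on the layout dump above, using only figures the log itself reports: the l2p region size follows from the L2P entry count and address size, and the four NV cache chunks presumably split the 4096.00 MiB data_nvc region into roughly 1 GiB chunks, which is why the first-startup scrub that follows covers 4 GiB.

    # 3774873 L2P entries x 4 B per address
    echo $(( 3774873 * 4 ))   # 15099492 bytes ~= 14.4 MiB, matching the 14.50 MiB "Region l2p" above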
00:21:54.036 [2024-11-28 00:16:08.446258] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:21:56.627 [2024-11-28 00:16:11.159570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.159742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:21:56.627 [2024-11-28 00:16:11.159844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2713.321 ms 00:21:56.627 [2024-11-28 00:16:11.159869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.160016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.160077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:21:56.627 [2024-11-28 00:16:11.160129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:21:56.627 [2024-11-28 00:16:11.160152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.162732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.162849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:21:56.627 [2024-11-28 00:16:11.162912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.470 ms 00:21:56.627 [2024-11-28 00:16:11.162935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.165393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.165498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:21:56.627 [2024-11-28 00:16:11.165554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.377 ms 00:21:56.627 [2024-11-28 00:16:11.165609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.165791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.165853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:21:56.627 [2024-11-28 00:16:11.165905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.139 ms 00:21:56.627 [2024-11-28 00:16:11.165926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.188343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.188484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:21:56.627 [2024-11-28 00:16:11.188535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.375 ms 00:21:56.627 [2024-11-28 00:16:11.188558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.192401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.192506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:21:56.627 [2024-11-28 00:16:11.192600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.799 ms 00:21:56.627 [2024-11-28 00:16:11.192622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.193906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.194007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:21:56.627 [2024-11-28 00:16:11.194084] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.176 ms 00:21:56.627 [2024-11-28 00:16:11.194106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.196960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.197059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:21:56.627 [2024-11-28 00:16:11.197139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.792 ms 00:21:56.627 [2024-11-28 00:16:11.197162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.197216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.197397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:21:56.627 [2024-11-28 00:16:11.197425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:56.627 [2024-11-28 00:16:11.197444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.197526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:56.627 [2024-11-28 00:16:11.197552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:21:56.627 [2024-11-28 00:16:11.197576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:21:56.627 [2024-11-28 00:16:11.197594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:56.627 [2024-11-28 00:16:11.198455] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2801.043 ms, result 0 00:21:56.627 { 00:21:56.627 "name": "ftl", 00:21:56.627 "uuid": "e11b55f6-8bf5-429b-9c26-2ec0793e0512" 00:21:56.627 } 00:21:56.627 00:16:11 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:21:56.885 [2024-11-28 00:16:11.391009] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.885 00:16:11 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:21:57.144 00:16:11 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:21:57.401 [2024-11-28 00:16:11.775481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:21:57.401 00:16:11 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:21:57.401 [2024-11-28 00:16:11.955800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:57.401 00:16:11 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:57.967 Fill FTL, iteration 1 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:21:57.967 00:16:12 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:57.967 00:16:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:57.967 00:16:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:57.967 00:16:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:57.967 00:16:12 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:21:57.967 00:16:12 -- ftl/common.sh@163 -- # spdk_ini_pid=87900 00:21:57.967 00:16:12 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:21:57.967 00:16:12 -- ftl/common.sh@165 -- # waitforlisten 87900 /var/tmp/spdk.tgt.sock 00:21:57.967 00:16:12 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:21:57.967 00:16:12 -- common/autotest_common.sh@829 -- # '[' -z 87900 ']' 00:21:57.967 00:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:21:57.967 00:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:21:57.967 00:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:21:57.967 00:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.967 00:16:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.967 [2024-11-28 00:16:12.343081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
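Condensed from the ftl/common.sh@121-126 trace above (the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path is shortened to rpc.py here), the sequence that exports the freshly created ftl bdev over NVMe/TCP is:

    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    rpc.py save_config

The interleaved tcp.c notices confirm the transport came up and the target is listening on 127.0.0.1 port 4420.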
00:21:57.967 [2024-11-28 00:16:12.343347] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87900 ] 00:21:57.967 [2024-11-28 00:16:12.493213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.967 [2024-11-28 00:16:12.523739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.967 [2024-11-28 00:16:12.524082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.902 00:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.902 00:16:13 -- common/autotest_common.sh@862 -- # return 0 00:21:58.902 00:16:13 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:21:58.902 ftln1 00:21:58.902 00:16:13 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:21:58.902 00:16:13 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:21:59.160 00:16:13 -- ftl/common.sh@173 -- # echo ']}' 00:21:59.160 00:16:13 -- ftl/common.sh@176 -- # killprocess 87900 00:21:59.160 00:16:13 -- common/autotest_common.sh@936 -- # '[' -z 87900 ']' 00:21:59.160 00:16:13 -- common/autotest_common.sh@940 -- # kill -0 87900 00:21:59.160 00:16:13 -- common/autotest_common.sh@941 -- # uname 00:21:59.160 00:16:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.160 00:16:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87900 00:21:59.160 killing process with pid 87900 00:21:59.160 00:16:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:59.160 00:16:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:59.160 00:16:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87900' 00:21:59.160 00:16:13 -- common/autotest_common.sh@955 -- # kill 87900 00:21:59.160 00:16:13 -- common/autotest_common.sh@960 -- # wait 87900 00:21:59.418 00:16:13 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:21:59.419 00:16:13 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:59.419 [2024-11-28 00:16:13.910317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
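The ftl/common.sh trace above (tcp_initiator_setup, roughly @151-@177) launches a throwaway spdk_tgt pinned to core 1 with its own RPC socket, attaches to the subsystem exported earlier (which surfaces the ftln1 bdev), dumps the resulting bdev subsystem config as JSON, and then kills the helper target so spdk_dd can reuse that config. A condensed sketch; the redirect into ini.json is inferred from the later --json=.../config/ini.json arguments, and paths are shortened:

    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &    # spdk_ini_pid=87900 in this run
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # creates ftln1
    { echo '{"subsystems": ['
      rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
    } > test/ftl/config/ini.json   # assumed destination; the log shows only the echoes
    kill 87900 && wait 87900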
00:21:59.419 [2024-11-28 00:16:13.910431] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87925 ] 00:21:59.676 [2024-11-28 00:16:14.058044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.676 [2024-11-28 00:16:14.087146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.054  [2024-11-28T00:16:16.591Z] Copying: 210/1024 [MB] (210 MBps) [2024-11-28T00:16:17.525Z] Copying: 445/1024 [MB] (235 MBps) [2024-11-28T00:16:18.456Z] Copying: 701/1024 [MB] (256 MBps) [2024-11-28T00:16:18.714Z] Copying: 960/1024 [MB] (259 MBps) [2024-11-28T00:16:18.714Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:22:04.112 00:22:04.112 00:16:18 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:04.112 00:16:18 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:04.112 Calculate MD5 checksum, iteration 1 00:22:04.112 00:16:18 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:04.112 00:16:18 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:04.112 00:16:18 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:04.112 00:16:18 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:04.112 00:16:18 -- ftl/common.sh@154 -- # return 0 00:22:04.112 00:16:18 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:04.371 [2024-11-28 00:16:18.733723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
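Both data-path directions above run through spdk_dd against that saved ini.json; only the roles of the ftln1 bdev and the flat file swap. Copied from the two invocations in the trace, with the --rpc-socket flag and long paths trimmed for readability:

    # fill: 1024 x 1 MiB blocks of /dev/urandom into the FTL bdev at queue depth 2
    spdk_dd '--cpumask=[1]' --json=ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # read-back: the same 1 GiB window out of the bdev into test/ftl/file for checksumming
    spdk_dd '--cpumask=[1]' --json=ini.json --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0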
00:22:04.371 [2024-11-28 00:16:18.733831] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87978 ] 00:22:04.371 [2024-11-28 00:16:18.879438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.371 [2024-11-28 00:16:18.906658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.745  [2024-11-28T00:16:20.606Z] Copying: 686/1024 [MB] (686 MBps) [2024-11-28T00:16:20.864Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:22:06.262 00:22:06.262 00:16:20 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:06.262 00:16:20 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7a6d5e28430162485bbf2a2d2cd5bb81 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:08.792 Fill FTL, iteration 2 00:22:08.792 00:16:22 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:08.792 00:16:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:08.792 00:16:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:08.792 00:16:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:08.792 00:16:22 -- ftl/common.sh@154 -- # return 0 00:22:08.792 00:16:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:08.792 [2024-11-28 00:16:22.830642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
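Folding the traced variables (upgrade_shutdown.sh@28-@35) and the per-iteration steps (@38-@48) together, the fill-and-checksum phase behaves like the loop below. This is a simplified reconstruction for readability, not the verbatim script; $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl:

    bs=1048576; count=1024; qd=2; iterations=2; seek=0; skip=0; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d ' ')
    done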
00:22:08.792 [2024-11-28 00:16:22.830865] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88028 ] 00:22:08.792 [2024-11-28 00:16:22.975695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.792 [2024-11-28 00:16:23.002848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.728  [2024-11-28T00:16:25.265Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-28T00:16:26.199Z] Copying: 518/1024 [MB] (259 MBps) [2024-11-28T00:16:27.135Z] Copying: 781/1024 [MB] (263 MBps) [2024-11-28T00:16:27.394Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:22:12.792 00:22:12.792 Calculate MD5 checksum, iteration 2 00:22:12.792 00:16:27 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:12.792 00:16:27 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:12.792 00:16:27 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:12.792 00:16:27 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:12.792 00:16:27 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:12.792 00:16:27 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:12.792 00:16:27 -- ftl/common.sh@154 -- # return 0 00:22:12.792 00:16:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:12.792 [2024-11-28 00:16:27.322059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
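Each pass moves bs x count bytes, matching the size variable traced at the top of the phase, so after the second fill 2 GiB of random data sit at bdev offsets 0 and 1024 MiB. That is consistent with the cache_device state reported by bdev_ftl_get_properties further below, where chunks 0 and 1 appear as CLOSED at utilization 1.0.

    echo $(( 1048576 * 1024 ))   # 1073741824 bytes = 1 GiB per pass, the size=1073741824 traced at upgrade_shutdown.sh@28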
00:22:12.792 [2024-11-28 00:16:27.322161] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88077 ] 00:22:13.051 [2024-11-28 00:16:27.468059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.051 [2024-11-28 00:16:27.495446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.425  [2024-11-28T00:16:29.593Z] Copying: 668/1024 [MB] (668 MBps) [2024-11-28T00:16:32.122Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:22:17.520 00:22:17.520 00:16:31 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:17.520 00:16:31 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:19.420 00:16:33 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:19.421 00:16:33 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ec5ae91a5c062a1b7d27f1758d8d5c73 00:22:19.421 00:16:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:19.421 00:16:33 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:19.421 00:16:33 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:19.421 [2024-11-28 00:16:34.012597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:19.421 [2024-11-28 00:16:34.012736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:19.421 [2024-11-28 00:16:34.012791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:19.421 [2024-11-28 00:16:34.012810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.421 [2024-11-28 00:16:34.012844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:19.421 [2024-11-28 00:16:34.012861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:19.421 [2024-11-28 00:16:34.012880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:19.421 [2024-11-28 00:16:34.012929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.421 [2024-11-28 00:16:34.012960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:19.421 [2024-11-28 00:16:34.012976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:19.421 [2024-11-28 00:16:34.012992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:19.421 [2024-11-28 00:16:34.013041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.421 [2024-11-28 00:16:34.013105] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.493 ms, result 0 00:22:19.421 true 00:22:19.680 00:16:34 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:19.680 { 00:22:19.680 "name": "ftl", 00:22:19.680 "properties": [ 00:22:19.680 { 00:22:19.680 "name": "superblock_version", 00:22:19.680 "value": 5, 00:22:19.680 "read-only": true 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "name": "base_device", 00:22:19.680 "bands": [ 00:22:19.680 { 00:22:19.680 "id": 0, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 1, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 2, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 
00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 3, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 4, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 5, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 6, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 7, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 8, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 9, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 10, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 11, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 12, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 13, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 14, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 15, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 16, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 17, 00:22:19.680 "state": "FREE", 00:22:19.680 "validity": 0.0 00:22:19.680 } 00:22:19.680 ], 00:22:19.680 "read-only": true 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "name": "cache_device", 00:22:19.680 "type": "bdev", 00:22:19.680 "chunks": [ 00:22:19.680 { 00:22:19.680 "id": 0, 00:22:19.680 "state": "CLOSED", 00:22:19.680 "utilization": 1.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 1, 00:22:19.680 "state": "CLOSED", 00:22:19.680 "utilization": 1.0 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 2, 00:22:19.680 "state": "OPEN", 00:22:19.680 "utilization": 0.001953125 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "id": 3, 00:22:19.680 "state": "OPEN", 00:22:19.680 "utilization": 0.0 00:22:19.680 } 00:22:19.680 ], 00:22:19.680 "read-only": true 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "name": "verbose_mode", 00:22:19.680 "value": true, 00:22:19.680 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:19.680 }, 00:22:19.680 { 00:22:19.680 "name": "prep_upgrade_on_shutdown", 00:22:19.680 "value": false, 00:22:19.680 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:19.680 } 00:22:19.680 ] 00:22:19.680 } 00:22:19.680 00:16:34 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:19.939 [2024-11-28 00:16:34.360904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:19.939 [2024-11-28 00:16:34.361020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:19.939 [2024-11-28 00:16:34.361062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:19.939 [2024-11-28 00:16:34.361079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.939 [2024-11-28 00:16:34.361110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:22:19.939 [2024-11-28 00:16:34.361126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:19.939 [2024-11-28 00:16:34.361133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:19.939 [2024-11-28 00:16:34.361138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.939 [2024-11-28 00:16:34.361153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:19.939 [2024-11-28 00:16:34.361159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:19.939 [2024-11-28 00:16:34.361165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:19.939 [2024-11-28 00:16:34.361170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:19.939 [2024-11-28 00:16:34.361223] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.300 ms, result 0 00:22:19.939 true 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:19.939 00:16:34 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:20.199 [2024-11-28 00:16:34.707835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.199 [2024-11-28 00:16:34.707875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:20.199 [2024-11-28 00:16:34.707885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:20.199 [2024-11-28 00:16:34.707891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.199 [2024-11-28 00:16:34.707910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.199 [2024-11-28 00:16:34.707916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:20.199 [2024-11-28 00:16:34.707923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:20.199 [2024-11-28 00:16:34.707928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.199 [2024-11-28 00:16:34.707943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.199 [2024-11-28 00:16:34.707948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:20.199 [2024-11-28 00:16:34.707954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:20.199 [2024-11-28 00:16:34.707959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.199 [2024-11-28 00:16:34.708005] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.162 ms, result 0 00:22:20.199 true 00:22:20.199 00:16:34 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:20.459 { 00:22:20.459 "name": "ftl", 00:22:20.459 "properties": [ 00:22:20.459 { 00:22:20.459 "name": "superblock_version", 00:22:20.459 "value": 5, 00:22:20.459 "read-only": true 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 
"name": "base_device", 00:22:20.459 "bands": [ 00:22:20.459 { 00:22:20.459 "id": 0, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 1, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 2, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 3, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 4, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 5, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 6, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 7, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 8, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 9, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 10, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 11, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 12, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 13, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 14, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 15, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 16, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 17, 00:22:20.459 "state": "FREE", 00:22:20.459 "validity": 0.0 00:22:20.459 } 00:22:20.459 ], 00:22:20.459 "read-only": true 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "name": "cache_device", 00:22:20.459 "type": "bdev", 00:22:20.459 "chunks": [ 00:22:20.459 { 00:22:20.459 "id": 0, 00:22:20.459 "state": "CLOSED", 00:22:20.459 "utilization": 1.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 1, 00:22:20.459 "state": "CLOSED", 00:22:20.459 "utilization": 1.0 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 2, 00:22:20.459 "state": "OPEN", 00:22:20.459 "utilization": 0.001953125 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "id": 3, 00:22:20.459 "state": "OPEN", 00:22:20.459 "utilization": 0.0 00:22:20.459 } 00:22:20.459 ], 00:22:20.459 "read-only": true 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "name": "verbose_mode", 00:22:20.459 "value": true, 00:22:20.459 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:20.459 }, 00:22:20.459 { 00:22:20.459 "name": "prep_upgrade_on_shutdown", 00:22:20.459 "value": true, 00:22:20.459 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:20.459 } 00:22:20.459 ] 00:22:20.459 } 00:22:20.459 00:16:34 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:20.459 00:16:34 -- ftl/common.sh@130 -- # [[ -n 87788 ]] 00:22:20.459 00:16:34 -- ftl/common.sh@131 -- # killprocess 87788 00:22:20.459 00:16:34 -- common/autotest_common.sh@936 -- # '[' -z 87788 ']' 00:22:20.459 00:16:34 -- 
common/autotest_common.sh@940 -- # kill -0 87788 00:22:20.459 00:16:34 -- common/autotest_common.sh@941 -- # uname 00:22:20.459 00:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.459 00:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87788 00:22:20.459 killing process with pid 87788 00:22:20.459 00:16:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:20.459 00:16:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:20.459 00:16:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87788' 00:22:20.459 00:16:34 -- common/autotest_common.sh@955 -- # kill 87788 00:22:20.459 00:16:34 -- common/autotest_common.sh@960 -- # wait 87788 00:22:20.459 [2024-11-28 00:16:35.019491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:22:20.459 [2024-11-28 00:16:35.022664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.459 [2024-11-28 00:16:35.022694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:20.459 [2024-11-28 00:16:35.022704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:20.459 [2024-11-28 00:16:35.022712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.459 [2024-11-28 00:16:35.022729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:20.460 [2024-11-28 00:16:35.023100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.460 [2024-11-28 00:16:35.023112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:20.460 [2024-11-28 00:16:35.023120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:22:20.460 [2024-11-28 00:16:35.023125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.922210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.922386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:28.572 [2024-11-28 00:16:41.922454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6899.035 ms 00:22:28.572 [2024-11-28 00:16:41.922476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.923523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.923595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:28.572 [2024-11-28 00:16:41.923646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.021 ms 00:22:28.572 [2024-11-28 00:16:41.923663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.924606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.924680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:22:28.572 [2024-11-28 00:16:41.924726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.862 ms 00:22:28.572 [2024-11-28 00:16:41.924744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.926064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.926154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:28.572 [2024-11-28 00:16:41.926195] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:22:28.572 [2024-11-28 00:16:41.926212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.928261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.928355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:28.572 [2024-11-28 00:16:41.928379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.018 ms 00:22:28.572 [2024-11-28 00:16:41.928386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.928446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.928453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:28.572 [2024-11-28 00:16:41.928459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:28.572 [2024-11-28 00:16:41.928465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.572 [2024-11-28 00:16:41.929606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.572 [2024-11-28 00:16:41.929681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:22:28.572 [2024-11-28 00:16:41.929719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:22:28.572 [2024-11-28 00:16:41.929735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.930718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.930796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:22:28.573 [2024-11-28 00:16:41.930834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.930 ms 00:22:28.573 [2024-11-28 00:16:41.930850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.931873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.931950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:28.573 [2024-11-28 00:16:41.931987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.994 ms 00:22:28.573 [2024-11-28 00:16:41.932003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.932909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.932987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:28.573 [2024-11-28 00:16:41.933025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:22:28.573 [2024-11-28 00:16:41.933041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.933069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:28.573 [2024-11-28 00:16:41.933089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:28.573 [2024-11-28 00:16:41.933210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:28.573 [2024-11-28 00:16:41.933234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:28.573 [2024-11-28 00:16:41.933263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.573 [2024-11-28 00:16:41.933722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:28.573 [2024-11-28 00:16:41.933737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e11b55f6-8bf5-429b-9c26-2ec0793e0512 00:22:28.573 [2024-11-28 00:16:41.933783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:28.573 [2024-11-28 00:16:41.933799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:28.573 [2024-11-28 00:16:41.933813] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:28.573 [2024-11-28 00:16:41.933829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:28.573 [2024-11-28 00:16:41.933843] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:28.573 [2024-11-28 00:16:41.933858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:28.573 [2024-11-28 00:16:41.933872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:28.573 [2024-11-28 00:16:41.933886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:28.573 [2024-11-28 00:16:41.933899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:28.573 [2024-11-28 00:16:41.933933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.933942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:28.573 [2024-11-28 00:16:41.933948] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.865 ms 00:22:28.573 [2024-11-28 00:16:41.933957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.935148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.935166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:28.573 [2024-11-28 00:16:41.935173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:22:28.573 [2024-11-28 00:16:41.935179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.935224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:28.573 [2024-11-28 00:16:41.935229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:28.573 [2024-11-28 00:16:41.935242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:22:28.573 [2024-11-28 00:16:41.935248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.939549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.939576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:28.573 [2024-11-28 00:16:41.939583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.939589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.939610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.939616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:28.573 [2024-11-28 00:16:41.939629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.939635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.939681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.939689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:28.573 [2024-11-28 00:16:41.939695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.939701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.939712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.939718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:28.573 [2024-11-28 00:16:41.939723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.939734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.947796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.947828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:28.573 [2024-11-28 00:16:41.947837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.947843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.950989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:28.573 
[2024-11-28 00:16:41.951027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:28.573 [2024-11-28 00:16:41.951077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:28.573 [2024-11-28 00:16:41.951125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:28.573 [2024-11-28 00:16:41.951194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:28.573 [2024-11-28 00:16:41.951232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:28.573 [2024-11-28 00:16:41.951280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:28.573 [2024-11-28 00:16:41.951327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:28.573 [2024-11-28 00:16:41.951333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:28.573 [2024-11-28 00:16:41.951338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:28.573 [2024-11-28 00:16:41.951450] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6928.743 ms, result 0 00:22:30.475 00:16:44 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:30.475 00:16:44 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:30.475 00:16:44 -- ftl/common.sh@81 -- # local base_bdev= 00:22:30.475 00:16:44 -- ftl/common.sh@82 -- # local cache_bdev= 00:22:30.475 00:16:44 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:30.475 00:16:44 -- ftl/common.sh@89 -- # spdk_tgt_pid=88277 00:22:30.475 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.475 00:16:44 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:30.475 00:16:44 -- ftl/common.sh@91 -- # waitforlisten 88277 00:22:30.475 00:16:44 -- common/autotest_common.sh@829 -- # '[' -z 88277 ']' 00:22:30.475 00:16:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.475 00:16:44 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:30.475 00:16:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.475 00:16:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.475 00:16:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.475 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.475 [2024-11-28 00:16:44.661825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:30.475 [2024-11-28 00:16:44.661935] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88277 ] 00:22:30.475 [2024-11-28 00:16:44.809616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.475 [2024-11-28 00:16:44.839259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.475 [2024-11-28 00:16:44.839657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.475 [2024-11-28 00:16:45.070306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:30.475 [2024-11-28 00:16:45.070596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:30.735 [2024-11-28 00:16:45.213081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.213135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:30.735 [2024-11-28 00:16:45.213148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:30.735 [2024-11-28 00:16:45.213156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.213217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.213228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:30.735 [2024-11-28 00:16:45.213238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:22:30.735 [2024-11-28 00:16:45.213245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.213267] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:30.735 [2024-11-28 00:16:45.213512] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:30.735 [2024-11-28 00:16:45.213529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.213537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:30.735 [2024-11-28 00:16:45.213545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:22:30.735 [2024-11-28 00:16:45.213551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:22:30.735 [2024-11-28 00:16:45.214601] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:30.735 [2024-11-28 00:16:45.216761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.216795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:30.735 [2024-11-28 00:16:45.216812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.162 ms 00:22:30.735 [2024-11-28 00:16:45.216819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.216869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.216881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:30.735 [2024-11-28 00:16:45.216889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:22:30.735 [2024-11-28 00:16:45.216895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.221401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.221433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:30.735 [2024-11-28 00:16:45.221442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.452 ms 00:22:30.735 [2024-11-28 00:16:45.221449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.221484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.221492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:30.735 [2024-11-28 00:16:45.221500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:30.735 [2024-11-28 00:16:45.221512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.221562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.221573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:30.735 [2024-11-28 00:16:45.221581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:22:30.735 [2024-11-28 00:16:45.221588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.221609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:30.735 [2024-11-28 00:16:45.222883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.222910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:30.735 [2024-11-28 00:16:45.222923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.279 ms 00:22:30.735 [2024-11-28 00:16:45.222930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.222959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.222971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:30.735 [2024-11-28 00:16:45.222979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:30.735 [2024-11-28 00:16:45.222985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.223007] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:30.735 
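Every FTL management step in the trace above is reported by trace_step as an Action (or Rollback) / name / duration / status group, and the same pattern continues through the layout and metadata steps that follow. When digging through a console capture like this one, a quick way to rank the slowest steps is a small awk filter over the trace lines (illustrative only: it assumes one trace entry per line, as in the raw console output, and ftl.log is a placeholder for wherever that capture was saved):

awk '/407:trace_step/ { sub(/.*name: /, "");     name = $0 }
     /409:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' ftl.log |
  sort -rn | head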
[2024-11-28 00:16:45.223024] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:22:30.735 [2024-11-28 00:16:45.223056] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:30.735 [2024-11-28 00:16:45.223074] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:22:30.735 [2024-11-28 00:16:45.223149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:22:30.735 [2024-11-28 00:16:45.223158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:30.735 [2024-11-28 00:16:45.223167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:22:30.735 [2024-11-28 00:16:45.223177] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223185] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223196] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:30.735 [2024-11-28 00:16:45.223203] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:30.735 [2024-11-28 00:16:45.223209] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:22:30.735 [2024-11-28 00:16:45.223216] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:22:30.735 [2024-11-28 00:16:45.223224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.223234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:30.735 [2024-11-28 00:16:45.223243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:22:30.735 [2024-11-28 00:16:45.223252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.223312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.735 [2024-11-28 00:16:45.223319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:30.735 [2024-11-28 00:16:45.223326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:22:30.735 [2024-11-28 00:16:45.223333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.735 [2024-11-28 00:16:45.223424] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:30.735 [2024-11-28 00:16:45.223479] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:30.735 [2024-11-28 00:16:45.223489] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:30.735 [2024-11-28 00:16:45.223517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:30.735 [2024-11-28 00:16:45.223530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:30.735 [2024-11-28 00:16:45.223536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 
00:22:30.735 [2024-11-28 00:16:45.223542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:30.735 [2024-11-28 00:16:45.223558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:30.735 [2024-11-28 00:16:45.223564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:30.735 [2024-11-28 00:16:45.223577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:30.735 [2024-11-28 00:16:45.223596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:22:30.735 [2024-11-28 00:16:45.223601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:22:30.735 [2024-11-28 00:16:45.223615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:22:30.735 [2024-11-28 00:16:45.223622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:30.735 [2024-11-28 00:16:45.223635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:30.735 [2024-11-28 00:16:45.223655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:30.735 [2024-11-28 00:16:45.223674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:30.735 [2024-11-28 00:16:45.223692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:30.735 [2024-11-28 00:16:45.223710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:30.735 [2024-11-28 00:16:45.223730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223742] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device 
layout: 00:22:30.735 [2024-11-28 00:16:45.223749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:30.735 [2024-11-28 00:16:45.223761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:30.735 [2024-11-28 00:16:45.223776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:30.735 [2024-11-28 00:16:45.223782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:30.735 [2024-11-28 00:16:45.223788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:30.735 [2024-11-28 00:16:45.223794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:30.735 [2024-11-28 00:16:45.223801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:30.735 [2024-11-28 00:16:45.223807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:30.735 [2024-11-28 00:16:45.223814] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:30.735 [2024-11-28 00:16:45.223823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:30.735 [2024-11-28 00:16:45.223833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:30.735 [2024-11-28 00:16:45.223841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:22:30.736 [2024-11-28 00:16:45.223866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:22:30.736 [2024-11-28 00:16:45.223873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:22:30.736 [2024-11-28 00:16:45.223882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:22:30.736 [2024-11-28 00:16:45.223889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:22:30.736 [2024-11-28 00:16:45.223923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:22:30.736 [2024-11-28 00:16:45.223930] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:30.736 [2024-11-28 00:16:45.223938] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223946] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:30.736 [2024-11-28 00:16:45.223953] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:30.736 [2024-11-28 00:16:45.223960] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:30.736 [2024-11-28 00:16:45.223966] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:30.736 [2024-11-28 00:16:45.223974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.223981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:30.736 [2024-11-28 00:16:45.223989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:22:30.736 [2024-11-28 00:16:45.223998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.229509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.229539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:30.736 [2024-11-28 00:16:45.229554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.467 ms 00:22:30.736 [2024-11-28 00:16:45.229564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.229604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.229611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:30.736 [2024-11-28 00:16:45.229626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:30.736 [2024-11-28 00:16:45.229635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.237956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.237988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:30.736 [2024-11-28 00:16:45.237997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.288 ms 00:22:30.736 [2024-11-28 00:16:45.238004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.238029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.238037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:30.736 [2024-11-28 00:16:45.238048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:30.736 [2024-11-28 00:16:45.238055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.238391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.238415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:30.736 [2024-11-28 
00:16:45.238424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:22:30.736 [2024-11-28 00:16:45.238431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.238479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.238490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:30.736 [2024-11-28 00:16:45.238502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:30.736 [2024-11-28 00:16:45.238510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.243640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.243667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:30.736 [2024-11-28 00:16:45.243676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.107 ms 00:22:30.736 [2024-11-28 00:16:45.243682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.245824] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:30.736 [2024-11-28 00:16:45.245956] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:30.736 [2024-11-28 00:16:45.245969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.245977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:30.736 [2024-11-28 00:16:45.245984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.208 ms 00:22:30.736 [2024-11-28 00:16:45.245991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.250006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.250035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:30.736 [2024-11-28 00:16:45.250045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.980 ms 00:22:30.736 [2024-11-28 00:16:45.250053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.251453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.251481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:30.736 [2024-11-28 00:16:45.251490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.364 ms 00:22:30.736 [2024-11-28 00:16:45.251496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.736 [2024-11-28 00:16:45.252649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.736 [2024-11-28 00:16:45.252763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:30.736 [2024-11-28 00:16:45.252777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.120 ms 00:22:30.736 [2024-11-28 00:16:45.252785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.252973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.252983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:30.737 [2024-11-28 00:16:45.252990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.131 ms 
00:22:30.737 [2024-11-28 00:16:45.252999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.270667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.270863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:30.737 [2024-11-28 00:16:45.270881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.651 ms 00:22:30.737 [2024-11-28 00:16:45.270889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.278209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:30.737 [2024-11-28 00:16:45.279013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.279042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:30.737 [2024-11-28 00:16:45.279056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.084 ms 00:22:30.737 [2024-11-28 00:16:45.279066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.279136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.279146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:30.737 [2024-11-28 00:16:45.279156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:30.737 [2024-11-28 00:16:45.279164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.279205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.279217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:30.737 [2024-11-28 00:16:45.279226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:30.737 [2024-11-28 00:16:45.279234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.280466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.280495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:22:30.737 [2024-11-28 00:16:45.280504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:22:30.737 [2024-11-28 00:16:45.280510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.280544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.280552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:30.737 [2024-11-28 00:16:45.280562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:30.737 [2024-11-28 00:16:45.280569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.280603] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:30.737 [2024-11-28 00:16:45.280612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.280619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:30.737 [2024-11-28 00:16:45.280627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:30.737 [2024-11-28 00:16:45.280634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.283647] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.283680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:30.737 [2024-11-28 00:16:45.283691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.994 ms 00:22:30.737 [2024-11-28 00:16:45.283708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.283772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.737 [2024-11-28 00:16:45.283782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:30.737 [2024-11-28 00:16:45.283790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:22:30.737 [2024-11-28 00:16:45.283804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.737 [2024-11-28 00:16:45.284718] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 71.269 ms, result 0 00:22:30.737 [2024-11-28 00:16:45.299967] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.737 [2024-11-28 00:16:45.316001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:22:30.737 [2024-11-28 00:16:45.324093] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:31.000 00:16:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.000 00:16:45 -- common/autotest_common.sh@862 -- # return 0 00:22:31.000 00:16:45 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:31.000 00:16:45 -- ftl/common.sh@95 -- # return 0 00:22:31.000 00:16:45 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:31.259 [2024-11-28 00:16:45.637042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:31.259 [2024-11-28 00:16:45.637085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:31.259 [2024-11-28 00:16:45.637101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:31.259 [2024-11-28 00:16:45.637109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:31.259 [2024-11-28 00:16:45.637132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:31.259 [2024-11-28 00:16:45.637140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:31.259 [2024-11-28 00:16:45.637148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:31.259 [2024-11-28 00:16:45.637155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:31.259 [2024-11-28 00:16:45.637175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:31.259 [2024-11-28 00:16:45.637184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:31.259 [2024-11-28 00:16:45.637194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:31.259 [2024-11-28 00:16:45.637210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:31.259 [2024-11-28 00:16:45.637270] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.215 ms, result 0 00:22:31.259 true 00:22:31.259 00:16:45 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:22:31.259 { 00:22:31.260 "name": "ftl", 00:22:31.260 "properties": [ 00:22:31.260 { 00:22:31.260 "name": "superblock_version", 00:22:31.260 "value": 5, 00:22:31.260 "read-only": true 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "name": "base_device", 00:22:31.260 "bands": [ 00:22:31.260 { 00:22:31.260 "id": 0, 00:22:31.260 "state": "CLOSED", 00:22:31.260 "validity": 1.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 1, 00:22:31.260 "state": "CLOSED", 00:22:31.260 "validity": 1.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 2, 00:22:31.260 "state": "CLOSED", 00:22:31.260 "validity": 0.007843137254901933 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 3, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 4, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 5, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 6, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 7, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 8, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 9, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 10, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 11, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 12, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 13, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 14, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 15, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 16, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 17, 00:22:31.260 "state": "FREE", 00:22:31.260 "validity": 0.0 00:22:31.260 } 00:22:31.260 ], 00:22:31.260 "read-only": true 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "name": "cache_device", 00:22:31.260 "type": "bdev", 00:22:31.260 "chunks": [ 00:22:31.260 { 00:22:31.260 "id": 0, 00:22:31.260 "state": "OPEN", 00:22:31.260 "utilization": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 1, 00:22:31.260 "state": "OPEN", 00:22:31.260 "utilization": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 2, 00:22:31.260 "state": "FREE", 00:22:31.260 "utilization": 0.0 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "id": 3, 00:22:31.260 "state": "FREE", 00:22:31.260 "utilization": 0.0 00:22:31.260 } 00:22:31.260 ], 00:22:31.260 "read-only": true 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "name": "verbose_mode", 00:22:31.260 "value": true, 00:22:31.260 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:31.260 }, 00:22:31.260 { 00:22:31.260 "name": "prep_upgrade_on_shutdown", 00:22:31.260 "value": false, 00:22:31.260 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:31.260 } 00:22:31.260 ] 00:22:31.260 } 00:22:31.260 00:16:45 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:22:31.260 00:16:45 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:31.260 00:16:45 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:31.518 00:16:46 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:31.518 00:16:46 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:31.518 00:16:46 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:31.518 00:16:46 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:31.518 00:16:46 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:31.777 Validate MD5 checksum, iteration 1 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:31.777 00:16:46 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:31.777 00:16:46 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:31.777 00:16:46 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:31.777 00:16:46 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:31.777 00:16:46 -- ftl/common.sh@154 -- # return 0 00:22:31.777 00:16:46 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:31.777 [2024-11-28 00:16:46.270461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:31.777 [2024-11-28 00:16:46.270690] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88304 ] 00:22:32.035 [2024-11-28 00:16:46.420953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.035 [2024-11-28 00:16:46.451573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.412  [2024-11-28T00:16:48.275Z] Copying: 702/1024 [MB] (702 MBps) [2024-11-28T00:16:48.843Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:22:34.241 00:22:34.241 00:16:48 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:34.241 00:16:48 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:36.775 Validate MD5 checksum, iteration 2 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@103 -- # sum=7a6d5e28430162485bbf2a2d2cd5bb81 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a6d5e28430162485bbf2a2d2cd5bb81 != \7\a\6\d\5\e\2\8\4\3\0\1\6\2\4\8\5\b\b\f\2\a\2\d\2\c\d\5\b\b\8\1 ]] 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:36.775 00:16:50 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:36.775 00:16:50 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:36.775 00:16:50 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:36.775 00:16:50 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:36.775 00:16:50 -- ftl/common.sh@154 -- # return 0 00:22:36.775 00:16:50 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:36.775 [2024-11-28 00:16:50.995335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
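The first checksum pass above pulled 1024 MiB out of ftln1 over NVMe/TCP and its md5sum (7a6d5e28…) matched the expected value; the spdk_dd instance starting here is iteration 2, which repeats the read with --skip=1024. Stripped of the xtrace noise, the loop upgrade_shutdown.sh is running amounts to roughly the following (a paraphrase: tcp_dd, the dd flags, and the md5sum/cut handling are taken from the trace, while testfile and sums[] are illustrative placeholders for the script's own variables):

skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # tcp_dd (test/ftl/common.sh) wraps spdk_dd and reads ftln1 over the NVMe/TCP export
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum == "${sums[i]}" ]] || return 1   # a mismatch fails the test
done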
00:22:36.775 [2024-11-28 00:16:50.995450] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88360 ] 00:22:36.775 [2024-11-28 00:16:51.142739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.775 [2024-11-28 00:16:51.171752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.150  [2024-11-28T00:16:53.011Z] Copying: 694/1024 [MB] (694 MBps) [2024-11-28T00:16:53.578Z] Copying: 1024/1024 [MB] (average 693 MBps) 00:22:38.976 00:22:38.976 00:16:53 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:38.976 00:16:53 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@103 -- # sum=ec5ae91a5c062a1b7d27f1758d8d5c73 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@105 -- # [[ ec5ae91a5c062a1b7d27f1758d8d5c73 != \e\c\5\a\e\9\1\a\5\c\0\6\2\a\1\b\7\d\2\7\f\1\7\5\8\d\8\d\5\c\7\3 ]] 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:22:41.531 00:16:55 -- ftl/common.sh@137 -- # [[ -n 88277 ]] 00:22:41.531 00:16:55 -- ftl/common.sh@138 -- # kill -9 88277 00:22:41.531 00:16:55 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:22:41.531 00:16:55 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:22:41.531 00:16:55 -- ftl/common.sh@81 -- # local base_bdev= 00:22:41.531 00:16:55 -- ftl/common.sh@82 -- # local cache_bdev= 00:22:41.531 00:16:55 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:41.531 00:16:55 -- ftl/common.sh@89 -- # spdk_tgt_pid=88410 00:22:41.531 00:16:55 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:41.531 00:16:55 -- ftl/common.sh@91 -- # waitforlisten 88410 00:22:41.531 00:16:55 -- common/autotest_common.sh@829 -- # '[' -z 88410 ']' 00:22:41.531 00:16:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.531 00:16:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.531 00:16:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.531 00:16:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.531 00:16:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.531 00:16:55 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:41.531 [2024-11-28 00:16:55.643487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
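Both checksum passes matched, so the harness moves on to the dirty-shutdown half of the test: the first target (pid 88277) is killed with SIGKILL instead of being shut down cleanly, and tcp_target_setup then starts a fresh spdk_tgt (pid 88410) on the same saved tgt.json, forcing FTL to come back up from whatever state the kill left behind. Condensed from the xtrace above, the sequence is roughly the following (the backgrounding and $! capture are implied by the spdk_tgt_pid assignment rather than shown verbatim; waitforlisten is the common autotest helper that waits for the RPC socket):

# force a dirty FTL shutdown, then restart the target on the same config
kill -9 "$spdk_tgt_pid"
unset spdk_tgt_pid
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock is listening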
00:22:41.531 [2024-11-28 00:16:55.643593] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88410 ] 00:22:41.531 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 88277 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:22:41.531 [2024-11-28 00:16:55.799299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.531 [2024-11-28 00:16:55.827709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:41.531 [2024-11-28 00:16:55.827878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.531 [2024-11-28 00:16:56.050952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:41.531 [2024-11-28 00:16:56.051173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:41.791 [2024-11-28 00:16:56.183345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.791 [2024-11-28 00:16:56.183387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:41.791 [2024-11-28 00:16:56.183398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:41.791 [2024-11-28 00:16:56.183403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.791 [2024-11-28 00:16:56.183441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.791 [2024-11-28 00:16:56.183449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:41.791 [2024-11-28 00:16:56.183456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:22:41.791 [2024-11-28 00:16:56.183462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.791 [2024-11-28 00:16:56.183475] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:41.791 [2024-11-28 00:16:56.183647] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:41.791 [2024-11-28 00:16:56.183659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.791 [2024-11-28 00:16:56.183665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:41.791 [2024-11-28 00:16:56.183671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:22:41.791 [2024-11-28 00:16:56.183681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.791 [2024-11-28 00:16:56.183870] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:41.791 [2024-11-28 00:16:56.186941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.791 [2024-11-28 00:16:56.186966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:41.791 [2024-11-28 00:16:56.186977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.070 ms 00:22:41.791 [2024-11-28 00:16:56.186983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.791 [2024-11-28 00:16:56.187750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.791 [2024-11-28 00:16:56.187773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:41.791 [2024-11-28 00:16:56.187785] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:22:41.792 [2024-11-28 00:16:56.187791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.188012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:41.792 [2024-11-28 00:16:56.188019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:22:41.792 [2024-11-28 00:16:56.188025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.188060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:41.792 [2024-11-28 00:16:56.188066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:22:41.792 [2024-11-28 00:16:56.188076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.188099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:41.792 [2024-11-28 00:16:56.188106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:41.792 [2024-11-28 00:16:56.188115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:41.792 [2024-11-28 00:16:56.188833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.188845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:41.792 [2024-11-28 00:16:56.188852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.706 ms 00:22:41.792 [2024-11-28 00:16:56.188861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.188888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:41.792 [2024-11-28 00:16:56.188896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:41.792 [2024-11-28 00:16:56.188905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.188924] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:41.792 [2024-11-28 00:16:56.188937] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:22:41.792 [2024-11-28 00:16:56.188961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:41.792 [2024-11-28 00:16:56.188974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:22:41.792 [2024-11-28 00:16:56.189032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:22:41.792 [2024-11-28 00:16:56.189039] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:41.792 [2024-11-28 00:16:56.189049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:22:41.792 [2024-11-28 00:16:56.189056] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189063] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189069] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:41.792 [2024-11-28 00:16:56.189074] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:41.792 [2024-11-28 00:16:56.189079] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:22:41.792 [2024-11-28 00:16:56.189085] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:22:41.792 [2024-11-28 00:16:56.189090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.189095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:41.792 [2024-11-28 00:16:56.189105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:22:41.792 [2024-11-28 00:16:56.189110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.189159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.792 [2024-11-28 00:16:56.189165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:41.792 [2024-11-28 00:16:56.189170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:22:41.792 [2024-11-28 00:16:56.189175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.792 [2024-11-28 00:16:56.189239] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:41.792 [2024-11-28 00:16:56.189246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:41.792 [2024-11-28 00:16:56.189252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:41.792 [2024-11-28 00:16:56.189270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:41.792 [2024-11-28 00:16:56.189280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:41.792 [2024-11-28 00:16:56.189285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:41.792 [2024-11-28 00:16:56.189292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:41.792 [2024-11-28 00:16:56.189304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:41.792 [2024-11-28 00:16:56.189309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:41.792 [2024-11-28 00:16:56.189319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:22:41.792 [2024-11-28 00:16:56.189334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:22:41.792 [2024-11-28 00:16:56.189338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:22:41.792 [2024-11-28 00:16:56.189348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:22:41.792 [2024-11-28 00:16:56.189353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:41.792 [2024-11-28 00:16:56.189374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:41.792 [2024-11-28 00:16:56.189391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:41.792 [2024-11-28 00:16:56.189406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:41.792 [2024-11-28 00:16:56.189421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:41.792 [2024-11-28 00:16:56.189435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:41.792 [2024-11-28 00:16:56.189450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189459] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:41.792 [2024-11-28 00:16:56.189466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:41.792 [2024-11-28 00:16:56.189471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:41.792 [2024-11-28 00:16:56.189487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:41.792 [2024-11-28 00:16:56.189493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:41.792 [2024-11-28 00:16:56.189499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:41.792 [2024-11-28 00:16:56.189505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:41.792 [2024-11-28 00:16:56.189510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:22:41.792 [2024-11-28 00:16:56.189516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:41.792 [2024-11-28 00:16:56.189522] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:41.792 [2024-11-28 00:16:56.189530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.792 [2024-11-28 00:16:56.189540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:41.792 [2024-11-28 00:16:56.189547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:22:41.792 [2024-11-28 00:16:56.189553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:22:41.792 [2024-11-28 00:16:56.189559] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:22:41.792 [2024-11-28 00:16:56.189565] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:22:41.792 [2024-11-28 00:16:56.189573] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:22:41.793 [2024-11-28 00:16:56.189579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:22:41.793 [2024-11-28 00:16:56.189585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:22:41.793 [2024-11-28 00:16:56.189616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:22:41.793 [2024-11-28 00:16:56.189622] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:41.793 [2024-11-28 00:16:56.189628] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189635] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:41.793 [2024-11-28 00:16:56.189641] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:41.793 [2024-11-28 00:16:56.189647] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:41.793 
[2024-11-28 00:16:56.189653] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:41.793 [2024-11-28 00:16:56.189659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.189665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:41.793 [2024-11-28 00:16:56.189674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:22:41.793 [2024-11-28 00:16:56.189680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.193622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.193644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:41.793 [2024-11-28 00:16:56.193652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.910 ms 00:22:41.793 [2024-11-28 00:16:56.193658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.193685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.193696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:41.793 [2024-11-28 00:16:56.193702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:41.793 [2024-11-28 00:16:56.193708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.201264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.201290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:41.793 [2024-11-28 00:16:56.201297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.531 ms 00:22:41.793 [2024-11-28 00:16:56.201303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.201320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.201326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:41.793 [2024-11-28 00:16:56.201335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:41.793 [2024-11-28 00:16:56.201340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.201416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.201424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:41.793 [2024-11-28 00:16:56.201430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:41.793 [2024-11-28 00:16:56.201436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.201464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.201470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:41.793 [2024-11-28 00:16:56.201475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:41.793 [2024-11-28 00:16:56.201483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.206191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.206216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:41.793 [2024-11-28 
00:16:56.206223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.693 ms 00:22:41.793 [2024-11-28 00:16:56.206233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.206289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.206296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:22:41.793 [2024-11-28 00:16:56.206305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:41.793 [2024-11-28 00:16:56.206312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.209427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.209454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:22:41.793 [2024-11-28 00:16:56.209461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.102 ms 00:22:41.793 [2024-11-28 00:16:56.209467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.210308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.210335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:41.793 [2024-11-28 00:16:56.210344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:22:41.793 [2024-11-28 00:16:56.210350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.225398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.225430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:41.793 [2024-11-28 00:16:56.225440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.020 ms 00:22:41.793 [2024-11-28 00:16:56.225446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.225510] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:22:41.793 [2024-11-28 00:16:56.225542] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:22:41.793 [2024-11-28 00:16:56.225571] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:22:41.793 [2024-11-28 00:16:56.225600] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:22:41.793 [2024-11-28 00:16:56.225606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.225611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:22:41.793 [2024-11-28 00:16:56.225619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.127 ms 00:22:41.793 [2024-11-28 00:16:56.225625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.225661] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:22:41.793 [2024-11-28 00:16:56.225671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.225676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:22:41.793 [2024-11-28 00:16:56.225682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:41.793 [2024-11-28 
00:16:56.225687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.227774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.227801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:22:41.793 [2024-11-28 00:16:56.227808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.071 ms 00:22:41.793 [2024-11-28 00:16:56.227818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.228289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.228310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:22:41.793 [2024-11-28 00:16:56.228317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:41.793 [2024-11-28 00:16:56.228322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.228342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:41.793 [2024-11-28 00:16:56.228348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:22:41.793 [2024-11-28 00:16:56.228354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:41.793 [2024-11-28 00:16:56.228359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:41.793 [2024-11-28 00:16:56.228490] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:22:42.129 [2024-11-28 00:16:56.623931] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:22:42.129 [2024-11-28 00:16:56.624078] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:22:42.696 [2024-11-28 00:16:57.128388] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:22:42.696 [2024-11-28 00:16:57.128477] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:42.696 [2024-11-28 00:16:57.128490] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:42.696 [2024-11-28 00:16:57.128500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.128508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:22:42.696 [2024-11-28 00:16:57.128520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 900.114 ms 00:22:42.696 [2024-11-28 00:16:57.128527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.128557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.128568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:22:42.696 [2024-11-28 00:16:57.128576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:42.696 [2024-11-28 00:16:57.128583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.136407] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:42.696 [2024-11-28 00:16:57.136503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.136513] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:42.696 [2024-11-28 00:16:57.136527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.904 ms 00:22:42.696 [2024-11-28 00:16:57.136537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.137190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.137212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:22:42.696 [2024-11-28 00:16:57.137221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.591 ms 00:22:42.696 [2024-11-28 00:16:57.137228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.139462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.139479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:22:42.696 [2024-11-28 00:16:57.139488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.218 ms 00:22:42.696 [2024-11-28 00:16:57.139500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.143538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.143568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:22:42.696 [2024-11-28 00:16:57.143577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.018 ms 00:22:42.696 [2024-11-28 00:16:57.143584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.143668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.143678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:42.696 [2024-11-28 00:16:57.143690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:22:42.696 [2024-11-28 00:16:57.143697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.144873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.144898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:22:42.696 [2024-11-28 00:16:57.144906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.157 ms 00:22:42.696 [2024-11-28 00:16:57.144913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.696 [2024-11-28 00:16:57.144936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.696 [2024-11-28 00:16:57.144943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:42.697 [2024-11-28 00:16:57.144951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:42.697 [2024-11-28 00:16:57.144958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.697 [2024-11-28 00:16:57.144997] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:42.697 [2024-11-28 00:16:57.145009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.697 [2024-11-28 00:16:57.145021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:42.697 [2024-11-28 00:16:57.145028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:22:42.697 [2024-11-28 00:16:57.145035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:42.697 [2024-11-28 00:16:57.145082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:42.697 [2024-11-28 00:16:57.145090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:42.697 [2024-11-28 00:16:57.145101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:22:42.697 [2024-11-28 00:16:57.145108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:42.697 [2024-11-28 00:16:57.145954] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 962.227 ms, result 0 00:22:42.697 [2024-11-28 00:16:57.161167] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.697 [2024-11-28 00:16:57.177196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:22:42.697 [2024-11-28 00:16:57.185289] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:43.263 00:16:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.263 00:16:57 -- common/autotest_common.sh@862 -- # return 0 00:22:43.263 00:16:57 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:43.263 00:16:57 -- ftl/common.sh@95 -- # return 0 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:43.263 Validate MD5 checksum, iteration 1 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:43.263 00:16:57 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:43.263 00:16:57 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:43.263 00:16:57 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:43.263 00:16:57 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:43.263 00:16:57 -- ftl/common.sh@154 -- # return 0 00:22:43.263 00:16:57 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:43.263 [2024-11-28 00:16:57.840919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:43.263 [2024-11-28 00:16:57.841033] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88447 ] 00:22:43.523 [2024-11-28 00:16:57.987961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.523 [2024-11-28 00:16:58.017630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.904  [2024-11-28T00:17:00.074Z] Copying: 717/1024 [MB] (717 MBps) [2024-11-28T00:17:00.640Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:22:46.038 00:22:46.038 00:17:00 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:46.038 00:17:00 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:47.940 Validate MD5 checksum, iteration 2 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@103 -- # sum=7a6d5e28430162485bbf2a2d2cd5bb81 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@105 -- # [[ 7a6d5e28430162485bbf2a2d2cd5bb81 != \7\a\6\d\5\e\2\8\4\3\0\1\6\2\4\8\5\b\b\f\2\a\2\d\2\c\d\5\b\b\8\1 ]] 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:47.940 00:17:02 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:47.940 00:17:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:47.940 00:17:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:47.940 00:17:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:47.940 00:17:02 -- ftl/common.sh@154 -- # return 0 00:22:47.940 00:17:02 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:47.940 [2024-11-28 00:17:02.518840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:47.940 [2024-11-28 00:17:02.518944] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88498 ] 00:22:48.198 [2024-11-28 00:17:02.666414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.198 [2024-11-28 00:17:02.695349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.572  [2024-11-28T00:17:04.741Z] Copying: 714/1024 [MB] (714 MBps) [2024-11-28T00:17:10.021Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:22:55.419 00:22:55.676 00:17:10 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:55.676 00:17:10 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=ec5ae91a5c062a1b7d27f1758d8d5c73 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ ec5ae91a5c062a1b7d27f1758d8d5c73 != \e\c\5\a\e\9\1\a\5\c\0\6\2\a\1\b\7\d\2\7\f\1\7\5\8\d\8\d\5\c\7\3 ]] 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:22:58.208 00:17:12 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:22:58.208 00:17:12 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:22:58.208 00:17:12 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:22:58.208 00:17:12 -- ftl/common.sh@130 -- # [[ -n 88410 ]] 00:22:58.208 00:17:12 -- ftl/common.sh@131 -- # killprocess 88410 00:22:58.208 00:17:12 -- common/autotest_common.sh@936 -- # '[' -z 88410 ']' 00:22:58.208 00:17:12 -- common/autotest_common.sh@940 -- # kill -0 88410 00:22:58.208 00:17:12 -- common/autotest_common.sh@941 -- # uname 00:22:58.208 00:17:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.208 00:17:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88410 00:22:58.208 killing process with pid 88410 00:22:58.208 00:17:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:58.208 00:17:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:58.208 00:17:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88410' 00:22:58.208 00:17:12 -- common/autotest_common.sh@955 -- # kill 88410 00:22:58.208 00:17:12 -- common/autotest_common.sh@960 -- # wait 88410 00:22:58.208 [2024-11-28 00:17:12.421830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:22:58.208 [2024-11-28 00:17:12.424679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.424713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:58.208 [2024-11-28 00:17:12.424723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:58.208 [2024-11-28 00:17:12.424732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 
[2024-11-28 00:17:12.424749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:58.208 [2024-11-28 00:17:12.425134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.425159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:58.208 [2024-11-28 00:17:12.425167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:22:58.208 [2024-11-28 00:17:12.425173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.425377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.425390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:58.208 [2024-11-28 00:17:12.425397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:22:58.208 [2024-11-28 00:17:12.425403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.426525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.426553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:58.208 [2024-11-28 00:17:12.426560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.109 ms 00:22:58.208 [2024-11-28 00:17:12.426565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.427406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.427425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:22:58.208 [2024-11-28 00:17:12.427433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.816 ms 00:22:58.208 [2024-11-28 00:17:12.427439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.428774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.428810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:58.208 [2024-11-28 00:17:12.428817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.308 ms 00:22:58.208 [2024-11-28 00:17:12.428823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.430041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.430071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:58.208 [2024-11-28 00:17:12.430078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.192 ms 00:22:58.208 [2024-11-28 00:17:12.430084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.430154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.430166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:58.208 [2024-11-28 00:17:12.430172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:22:58.208 [2024-11-28 00:17:12.430178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.431560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.431589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:22:58.208 [2024-11-28 00:17:12.431596] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.369 ms 00:22:58.208 [2024-11-28 00:17:12.431602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.432972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.433001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:22:58.208 [2024-11-28 00:17:12.433007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.346 ms 00:22:58.208 [2024-11-28 00:17:12.433012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.434202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.434231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:58.208 [2024-11-28 00:17:12.434238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.164 ms 00:22:58.208 [2024-11-28 00:17:12.434243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.435273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.208 [2024-11-28 00:17:12.435300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:58.208 [2024-11-28 00:17:12.435307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.974 ms 00:22:58.208 [2024-11-28 00:17:12.435313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.208 [2024-11-28 00:17:12.435338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:58.208 [2024-11-28 00:17:12.435348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:58.208 [2024-11-28 00:17:12.435357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:58.208 [2024-11-28 00:17:12.435381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:58.208 [2024-11-28 00:17:12.435387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435446] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:58.209 [2024-11-28 00:17:12.435476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:58.209 [2024-11-28 00:17:12.435482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e11b55f6-8bf5-429b-9c26-2ec0793e0512 00:22:58.209 [2024-11-28 00:17:12.435489] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:58.209 [2024-11-28 00:17:12.435494] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:22:58.209 [2024-11-28 00:17:12.435499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:22:58.209 [2024-11-28 00:17:12.435505] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:22:58.209 [2024-11-28 00:17:12.435512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:58.209 [2024-11-28 00:17:12.435518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:58.209 [2024-11-28 00:17:12.435524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:58.209 [2024-11-28 00:17:12.435529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:58.209 [2024-11-28 00:17:12.435534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:58.209 [2024-11-28 00:17:12.435539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.209 [2024-11-28 00:17:12.435545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:58.209 [2024-11-28 00:17:12.435551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:22:58.209 [2024-11-28 00:17:12.435557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.436783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.209 [2024-11-28 00:17:12.436809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:58.209 [2024-11-28 00:17:12.436820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:22:58.209 [2024-11-28 00:17:12.436826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.436874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.209 [2024-11-28 00:17:12.436880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:58.209 [2024-11-28 00:17:12.436886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:58.209 [2024-11-28 00:17:12.436893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.441631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.441661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:58.209 [2024-11-28 00:17:12.441668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.441674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.441696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.441703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:58.209 [2024-11-28 00:17:12.441708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.441714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.441806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.441821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:58.209 [2024-11-28 00:17:12.441830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.441836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.441859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.441866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:58.209 [2024-11-28 00:17:12.441871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.441876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.450083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.450117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:58.209 [2024-11-28 00:17:12.450129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.450135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:58.209 [2024-11-28 00:17:12.453617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:58.209 [2024-11-28 00:17:12.453669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:58.209 [2024-11-28 00:17:12.453754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:58.209 [2024-11-28 00:17:12.453826] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:58.209 [2024-11-28 00:17:12.453872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:58.209 [2024-11-28 00:17:12.453922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.453965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:58.209 [2024-11-28 00:17:12.453972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:58.209 [2024-11-28 00:17:12.453978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:58.209 [2024-11-28 00:17:12.453984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.209 [2024-11-28 00:17:12.454080] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 29.379 ms, result 0 00:22:58.209 00:17:12 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:58.209 00:17:12 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.209 00:17:12 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:22:58.209 00:17:12 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:22:58.209 00:17:12 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:22:58.209 00:17:12 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.209 00:17:12 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:22:58.209 Remove shared memory files 00:22:58.209 00:17:12 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.209 00:17:12 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.209 00:17:12 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.209 00:17:12 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid88277 00:22:58.209 00:17:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.209 00:17:12 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.209 ************************************ 00:22:58.209 END TEST ftl_upgrade_shutdown 00:22:58.209 ************************************ 00:22:58.209 00:22:58.209 real 1m7.409s 00:22:58.209 user 1m32.258s 00:22:58.209 sys 0m16.882s 00:22:58.209 00:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:58.209 00:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.209 Process with pid 81896 is not found 00:22:58.209 00:17:12 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:22:58.209 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:22:58.209 00:17:12 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:22:58.209 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:22:58.209 00:17:12 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:22:58.209 00:17:12 -- ftl/ftl.sh@14 -- 
# killprocess 81896 00:22:58.209 00:17:12 -- common/autotest_common.sh@936 -- # '[' -z 81896 ']' 00:22:58.209 00:17:12 -- common/autotest_common.sh@940 -- # kill -0 81896 00:22:58.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81896) - No such process 00:22:58.209 00:17:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81896 is not found' 00:22:58.209 00:17:12 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:22:58.209 00:17:12 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88639 00:22:58.210 00:17:12 -- ftl/ftl.sh@20 -- # waitforlisten 88639 00:22:58.210 00:17:12 -- common/autotest_common.sh@829 -- # '[' -z 88639 ']' 00:22:58.210 00:17:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.210 00:17:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.210 00:17:12 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.210 00:17:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.210 00:17:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.210 00:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.210 [2024-11-28 00:17:12.731249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:58.210 [2024-11-28 00:17:12.731358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88639 ] 00:22:58.469 [2024-11-28 00:17:12.877016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.469 [2024-11-28 00:17:12.907989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:58.469 [2024-11-28 00:17:12.908207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.036 00:17:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.036 00:17:13 -- common/autotest_common.sh@862 -- # return 0 00:22:59.036 00:17:13 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:59.294 nvme0n1 00:22:59.295 00:17:13 -- ftl/ftl.sh@22 -- # clear_lvols 00:22:59.295 00:17:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:59.295 00:17:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:59.553 00:17:13 -- ftl/common.sh@28 -- # stores=19849a54-ba3e-4c6a-8b42-0e6c510257ae 00:22:59.553 00:17:13 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:59.553 00:17:13 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19849a54-ba3e-4c6a-8b42-0e6c510257ae 00:22:59.812 00:17:14 -- ftl/ftl.sh@23 -- # killprocess 88639 00:22:59.812 00:17:14 -- common/autotest_common.sh@936 -- # '[' -z 88639 ']' 00:22:59.812 00:17:14 -- common/autotest_common.sh@940 -- # kill -0 88639 00:22:59.812 00:17:14 -- common/autotest_common.sh@941 -- # uname 00:22:59.812 00:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.812 00:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88639 00:22:59.812 00:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:59.812 00:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 
= sudo ']' 00:22:59.812 killing process with pid 88639 00:22:59.812 00:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88639' 00:22:59.812 00:17:14 -- common/autotest_common.sh@955 -- # kill 88639 00:22:59.812 00:17:14 -- common/autotest_common.sh@960 -- # wait 88639 00:23:00.071 00:17:14 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:00.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.330 Waiting for block devices as requested 00:23:00.330 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:23:00.330 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:23:00.330 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:23:00.330 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:23:05.610 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:23:05.610 00:17:19 -- ftl/ftl.sh@28 -- # remove_shm 00:23:05.610 Remove shared memory files 00:23:05.610 00:17:19 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:05.610 00:17:19 -- ftl/common.sh@205 -- # rm -f rm -f 00:23:05.610 00:17:19 -- ftl/common.sh@206 -- # rm -f rm -f 00:23:05.610 00:17:19 -- ftl/common.sh@207 -- # rm -f rm -f 00:23:05.610 00:17:20 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:05.610 00:17:20 -- ftl/common.sh@209 -- # rm -f rm -f 00:23:05.610 00:23:05.610 real 9m10.084s 00:23:05.610 user 11m6.295s 00:23:05.610 sys 0m59.852s 00:23:05.610 00:17:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:05.610 ************************************ 00:23:05.610 END TEST ftl 00:23:05.610 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:05.610 ************************************ 00:23:05.610 00:17:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:05.610 00:17:20 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:05.610 00:17:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:05.610 00:17:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:05.610 00:17:20 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:23:05.610 00:17:20 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:23:05.610 00:17:20 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:05.610 00:17:20 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:05.610 00:17:20 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:23:05.610 00:17:20 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:23:05.610 00:17:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.610 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:05.610 00:17:20 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:23:05.610 00:17:20 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:23:05.610 00:17:20 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:23:05.610 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.037 INFO: APP EXITING 00:23:07.037 INFO: killing all VMs 00:23:07.037 INFO: killing vhost app 00:23:07.037 INFO: EXIT DONE 00:23:07.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.869 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:23:07.869 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:23:07.869 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:23:07.869 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:23:08.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:23:08.440 Cleaning 00:23:08.440 Removing: /var/run/dpdk/spdk0/config 00:23:08.440 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:08.440 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:08.440 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:08.440 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:08.440 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:08.701 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:08.701 Removing: /var/run/dpdk/spdk0 00:23:08.701 Removing: /var/run/dpdk/spdk_pid68427 00:23:08.701 Removing: /var/run/dpdk/spdk_pid68584 00:23:08.701 Removing: /var/run/dpdk/spdk_pid68867 00:23:08.701 Removing: /var/run/dpdk/spdk_pid68945 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69022 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69117 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69191 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69235 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69267 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69331 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69407 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69834 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69876 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69917 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69933 00:23:08.701 Removing: /var/run/dpdk/spdk_pid69991 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70007 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70064 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70076 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70118 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70136 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70178 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70190 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70316 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70353 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70435 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70483 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70509 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70570 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70591 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70625 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70641 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70677 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70692 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70727 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70748 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70778 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70798 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70834 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70849 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70884 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70905 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70935 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70961 00:23:08.701 Removing: /var/run/dpdk/spdk_pid70991 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71006 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71047 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71062 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71092 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71120 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71150 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71165 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71206 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71221 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71261 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71277 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71307 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71333 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71364 00:23:08.701 Removing: 
/var/run/dpdk/spdk_pid71381 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71421 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71440 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71479 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71497 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71535 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71556 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71586 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71606 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71643 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71715 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71811 00:23:08.701 Removing: /var/run/dpdk/spdk_pid71970 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72037 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72063 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72488 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72854 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72958 00:23:08.701 Removing: /var/run/dpdk/spdk_pid72989 00:23:08.701 Removing: /var/run/dpdk/spdk_pid73020 00:23:08.701 Removing: /var/run/dpdk/spdk_pid73092 00:23:08.701 Removing: /var/run/dpdk/spdk_pid73730 00:23:08.701 Removing: /var/run/dpdk/spdk_pid73755 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74204 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74322 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74420 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74462 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74482 00:23:08.701 Removing: /var/run/dpdk/spdk_pid74508 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76400 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76517 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76527 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76539 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76614 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76628 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76641 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76724 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76728 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76746 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76811 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76815 00:23:08.701 Removing: /var/run/dpdk/spdk_pid76827 00:23:08.701 Removing: /var/run/dpdk/spdk_pid78253 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78333 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78456 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78514 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78564 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78618 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78695 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78758 00:23:08.961 Removing: /var/run/dpdk/spdk_pid78894 00:23:08.961 Removing: /var/run/dpdk/spdk_pid79264 00:23:08.961 Removing: /var/run/dpdk/spdk_pid79284 00:23:08.961 Removing: /var/run/dpdk/spdk_pid79710 00:23:08.961 Removing: /var/run/dpdk/spdk_pid79885 00:23:08.961 Removing: /var/run/dpdk/spdk_pid79978 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80070 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80108 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80129 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80507 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80540 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80596 00:23:08.961 Removing: /var/run/dpdk/spdk_pid80955 00:23:08.961 Removing: /var/run/dpdk/spdk_pid81093 00:23:08.961 Removing: /var/run/dpdk/spdk_pid81896 00:23:08.961 Removing: /var/run/dpdk/spdk_pid82005 00:23:08.961 Removing: /var/run/dpdk/spdk_pid82192 00:23:08.961 Removing: /var/run/dpdk/spdk_pid82267 00:23:08.961 Removing: /var/run/dpdk/spdk_pid82547 00:23:08.961 Removing: /var/run/dpdk/spdk_pid82768 
00:23:08.961 Removing: /var/run/dpdk/spdk_pid83157 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83361 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83430 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83468 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83565 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83585 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83621 00:23:08.961 Removing: /var/run/dpdk/spdk_pid83773 00:23:08.961 Removing: /var/run/dpdk/spdk_pid84014 00:23:08.961 Removing: /var/run/dpdk/spdk_pid84409 00:23:08.961 Removing: /var/run/dpdk/spdk_pid84852 00:23:08.961 Removing: /var/run/dpdk/spdk_pid85274 00:23:08.961 Removing: /var/run/dpdk/spdk_pid85699 00:23:08.961 Removing: /var/run/dpdk/spdk_pid85831 00:23:08.961 Removing: /var/run/dpdk/spdk_pid85917 00:23:08.961 Removing: /var/run/dpdk/spdk_pid86304 00:23:08.961 Removing: /var/run/dpdk/spdk_pid86357 00:23:08.961 Removing: /var/run/dpdk/spdk_pid86648 00:23:08.961 Removing: /var/run/dpdk/spdk_pid87083 00:23:08.961 Removing: /var/run/dpdk/spdk_pid87788 00:23:08.961 Removing: /var/run/dpdk/spdk_pid87900 00:23:08.961 Removing: /var/run/dpdk/spdk_pid87925 00:23:08.961 Removing: /var/run/dpdk/spdk_pid87978 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88028 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88077 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88277 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88304 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88360 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88410 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88447 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88498 00:23:08.961 Removing: /var/run/dpdk/spdk_pid88639 00:23:08.961 Clean 00:23:08.961 killing process with pid 60603 00:23:09.221 killing process with pid 60621 00:23:09.221 00:17:23 -- common/autotest_common.sh@1446 -- # return 0 00:23:09.221 00:17:23 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:23:09.221 00:17:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.221 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.221 00:17:23 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:23:09.221 00:17:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.221 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.221 00:17:23 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:09.221 00:17:23 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:09.221 00:17:23 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:09.221 00:17:23 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:23:09.221 00:17:23 -- spdk/autotest.sh@383 -- # hostname 00:23:09.221 00:17:23 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:09.481 geninfo: WARNING: invalid characters removed from testname! 
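Note on the coverage post-processing that the log entries below perform: the base and test captures are merged with lcov -a, unwanted sources (DPDK, system headers, example apps and tools) are then stripped with lcov -r, and the helpers first probe lcov --version (the cmp_versions 1.15 '<' 2 trace) to decide whether the branch/function --rc switches apply. The sketch below condenses those steps; the OUT variable and the plain integer major-version test are illustrative stand-ins, not the exact autotest helper functions.

#!/usr/bin/env bash
# Sketch of the coverage post-processing shown in the following log entries.
# OUT is an illustrative stand-in for /home/vagrant/spdk_repo/spdk/../output.
set -e
OUT=/home/vagrant/spdk_repo/output

# lcov 2.x renamed several options, so older lcov (as here, 1.15) gets the
# branch/function --rc switches; a simple major-version test stands in for cmp_versions.
ver=$(lcov --version | awk '{print $NF}')
rc_opts=""
if [[ ${ver%%.*} -lt 2 ]]; then
    rc_opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi

# Merge the base and test captures into one combined report.
lcov $rc_opts -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip sources that should not count toward SPDK coverage
# (the real run additionally passes --ignore-errors unused for the '/usr/*' pattern).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc_opts -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# Drop the intermediate captures once the filtered total exists.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"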
00:23:36.057 00:17:46 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:36.057 00:17:49 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:37.958 00:17:52 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:40.491 00:17:54 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:41.864 00:17:56 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:43.766 00:17:58 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:46.298 00:18:00 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:46.298 00:18:00 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:23:46.298 00:18:00 -- common/autotest_common.sh@1690 -- $ lcov --version
00:23:46.298 00:18:00 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:23:46.298 00:18:00 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:23:46.298 00:18:00 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:23:46.298 00:18:00 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:23:46.298 00:18:00 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:23:46.298 00:18:00 -- scripts/common.sh@335 -- $ IFS=.-:
00:23:46.298 00:18:00 -- scripts/common.sh@335 -- $ read -ra ver1
00:23:46.298 00:18:00 -- scripts/common.sh@336 -- $ IFS=.-:
00:23:46.298 00:18:00 -- scripts/common.sh@336 -- $ read -ra ver2
00:23:46.298 00:18:00 -- scripts/common.sh@337 -- $ local 'op=<'
00:23:46.298 00:18:00 -- scripts/common.sh@339 -- $ ver1_l=2
00:23:46.298 00:18:00 -- scripts/common.sh@340 -- $ ver2_l=1
00:23:46.298 00:18:00 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0
v 00:23:46.298 00:18:00 -- scripts/common.sh@343 -- $ case "$op" in 00:23:46.298 00:18:00 -- scripts/common.sh@344 -- $ : 1 00:23:46.298 00:18:00 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:23:46.298 00:18:00 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.298 00:18:00 -- scripts/common.sh@364 -- $ decimal 1 00:23:46.298 00:18:00 -- scripts/common.sh@352 -- $ local d=1 00:23:46.298 00:18:00 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:46.298 00:18:00 -- scripts/common.sh@354 -- $ echo 1 00:23:46.298 00:18:00 -- scripts/common.sh@364 -- $ ver1[v]=1 00:23:46.298 00:18:00 -- scripts/common.sh@365 -- $ decimal 2 00:23:46.298 00:18:00 -- scripts/common.sh@352 -- $ local d=2 00:23:46.298 00:18:00 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:46.298 00:18:00 -- scripts/common.sh@354 -- $ echo 2 00:23:46.298 00:18:00 -- scripts/common.sh@365 -- $ ver2[v]=2 00:23:46.298 00:18:00 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:23:46.298 00:18:00 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:23:46.298 00:18:00 -- scripts/common.sh@367 -- $ return 0 00:23:46.298 00:18:00 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.298 00:18:00 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:23:46.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.298 --rc genhtml_branch_coverage=1 00:23:46.298 --rc genhtml_function_coverage=1 00:23:46.298 --rc genhtml_legend=1 00:23:46.298 --rc geninfo_all_blocks=1 00:23:46.298 --rc geninfo_unexecuted_blocks=1 00:23:46.298 00:23:46.298 ' 00:23:46.298 00:18:00 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:23:46.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.298 --rc genhtml_branch_coverage=1 00:23:46.298 --rc genhtml_function_coverage=1 00:23:46.298 --rc genhtml_legend=1 00:23:46.298 --rc geninfo_all_blocks=1 00:23:46.298 --rc geninfo_unexecuted_blocks=1 00:23:46.298 00:23:46.298 ' 00:23:46.298 00:18:00 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:23:46.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.298 --rc genhtml_branch_coverage=1 00:23:46.298 --rc genhtml_function_coverage=1 00:23:46.298 --rc genhtml_legend=1 00:23:46.298 --rc geninfo_all_blocks=1 00:23:46.298 --rc geninfo_unexecuted_blocks=1 00:23:46.298 00:23:46.298 ' 00:23:46.298 00:18:00 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:23:46.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.298 --rc genhtml_branch_coverage=1 00:23:46.298 --rc genhtml_function_coverage=1 00:23:46.298 --rc genhtml_legend=1 00:23:46.298 --rc geninfo_all_blocks=1 00:23:46.298 --rc geninfo_unexecuted_blocks=1 00:23:46.298 00:23:46.298 ' 00:23:46.298 00:18:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:46.298 00:18:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:46.298 00:18:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.298 00:18:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.298 00:18:00 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.298 00:18:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.298 00:18:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.298 00:18:00 -- paths/export.sh@5 -- $ export PATH 00:23:46.298 00:18:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.298 00:18:00 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:46.298 00:18:00 -- common/autobuild_common.sh@440 -- $ date +%s 00:23:46.298 00:18:00 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732753080.XXXXXX 00:23:46.298 00:18:00 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732753080.OISYhI 00:23:46.298 00:18:00 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:23:46.298 00:18:00 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:23:46.298 00:18:00 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:23:46.298 00:18:00 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:23:46.298 00:18:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:46.298 00:18:00 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:46.298 00:18:00 -- common/autobuild_common.sh@456 -- $ get_config_params 00:23:46.298 00:18:00 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:23:46.298 00:18:00 -- common/autotest_common.sh@10 -- $ set +x 00:23:46.299 00:18:00 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:23:46.299 00:18:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:46.299 00:18:00 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:46.299 00:18:00 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:46.299 00:18:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:23:46.299 00:18:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:46.299 00:18:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:46.299 00:18:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:46.299 00:18:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:46.299 00:18:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:46.299 00:18:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:46.299 + [[ -n 5736 ]] 00:23:46.299 + sudo kill 5736 00:23:46.307 [Pipeline] } 00:23:46.323 [Pipeline] // timeout 00:23:46.329 [Pipeline] } 00:23:46.343 [Pipeline] // stage 00:23:46.348 [Pipeline] } 00:23:46.363 [Pipeline] // catchError 00:23:46.372 [Pipeline] stage 00:23:46.375 [Pipeline] { (Stop VM) 00:23:46.388 [Pipeline] sh 00:23:46.667 + vagrant halt 00:23:49.197 ==> default: Halting domain... 00:23:54.493 [Pipeline] sh 00:23:54.775 + vagrant destroy -f 00:23:57.344 ==> default: Removing domain... 00:23:57.619 [Pipeline] sh 00:23:57.903 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:23:57.914 [Pipeline] } 00:23:57.929 [Pipeline] // stage 00:23:57.934 [Pipeline] } 00:23:57.949 [Pipeline] // dir 00:23:57.954 [Pipeline] } 00:23:57.969 [Pipeline] // wrap 00:23:57.976 [Pipeline] } 00:23:57.989 [Pipeline] // catchError 00:23:57.999 [Pipeline] stage 00:23:58.001 [Pipeline] { (Epilogue) 00:23:58.015 [Pipeline] sh 00:23:58.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:02.513 [Pipeline] catchError 00:24:02.515 [Pipeline] { 00:24:02.529 [Pipeline] sh 00:24:02.816 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:03.077 Artifacts sizes are good 00:24:03.089 [Pipeline] } 00:24:03.103 [Pipeline] // catchError 00:24:03.116 [Pipeline] archiveArtifacts 00:24:03.124 Archiving artifacts 00:24:03.228 [Pipeline] cleanWs 00:24:03.242 [WS-CLEANUP] Deleting project workspace... 00:24:03.242 [WS-CLEANUP] Deferred wipeout is used... 00:24:03.250 [WS-CLEANUP] done 00:24:03.252 [Pipeline] } 00:24:03.268 [Pipeline] // stage 00:24:03.273 [Pipeline] } 00:24:03.288 [Pipeline] // node 00:24:03.294 [Pipeline] End of Pipeline 00:24:03.330 Finished: SUCCESS
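For reference, the Stop VM and Epilogue stages above boil down to a handful of commands. The sketch below condenses them into a standalone script; it is not the actual Groovy pipeline code, and the workspace path is simply the one shown in this run's log.

#!/usr/bin/env bash
# Condensed sketch of the end-of-run teardown performed by the pipeline above.
set -e
WORKSPACE=/var/jenkins/workspace/nvme-vg-autotest   # path taken from this log

# Shut down and delete the Vagrant VM that ran the autotest.
vagrant halt
vagrant destroy -f

# Move the collected results into the Jenkins workspace so they can be archived.
mv output "$WORKSPACE/output"

# Compress and sanity-check the artifacts, as the Epilogue stage does via the jbp scripts.
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh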