00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 975 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3642 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.086 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.098 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.112 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.113 > git config core.sparsecheckout # timeout=10 00:00:05.123 > git read-tree -mu HEAD # timeout=10 00:00:05.138 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.156 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.157 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.266 [Pipeline] Start of Pipeline 00:00:05.281 [Pipeline] library 00:00:05.282 Loading library shm_lib@master 00:00:05.282 Library shm_lib@master is cached. Copying from home. 00:00:05.300 [Pipeline] node 00:00:05.312 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.314 [Pipeline] { 00:00:05.322 [Pipeline] catchError 00:00:05.323 [Pipeline] { 00:00:05.332 [Pipeline] wrap 00:00:05.339 [Pipeline] { 00:00:05.346 [Pipeline] stage 00:00:05.347 [Pipeline] { (Prologue) 00:00:05.360 [Pipeline] echo 00:00:05.362 Node: VM-host-SM38 00:00:05.366 [Pipeline] cleanWs 00:00:05.376 [WS-CLEANUP] Deleting project workspace... 00:00:05.376 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.384 [WS-CLEANUP] done 00:00:05.598 [Pipeline] setCustomBuildProperty 00:00:05.661 [Pipeline] httpRequest 00:00:06.329 [Pipeline] echo 00:00:06.331 Sorcerer 10.211.164.20 is alive 00:00:06.340 [Pipeline] retry 00:00:06.342 [Pipeline] { 00:00:06.354 [Pipeline] httpRequest 00:00:06.358 HttpMethod: GET 00:00:06.358 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.359 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.361 Response Code: HTTP/1.1 200 OK 00:00:06.361 Success: Status code 200 is in the accepted range: 200,404 00:00:06.362 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.596 [Pipeline] } 00:00:07.612 [Pipeline] // retry 00:00:07.620 [Pipeline] sh 00:00:07.902 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.917 [Pipeline] httpRequest 00:00:08.475 [Pipeline] echo 00:00:08.476 Sorcerer 10.211.164.20 is alive 00:00:08.482 [Pipeline] retry 00:00:08.484 [Pipeline] { 00:00:08.495 [Pipeline] httpRequest 00:00:08.500 HttpMethod: GET 00:00:08.500 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.501 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.526 Response Code: HTTP/1.1 200 OK 00:00:08.527 Success: Status code 200 is in the accepted range: 200,404 00:00:08.527 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:30.033 [Pipeline] } 00:01:30.052 [Pipeline] // retry 00:01:30.060 [Pipeline] sh 00:01:30.349 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:32.902 [Pipeline] sh 00:01:33.182 + git -C spdk log --oneline -n5 00:01:33.182 c13c99a5e test: Various fixes for Fedora40 00:01:33.182 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:33.182 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:33.182 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:33.182 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:33.203 [Pipeline] withCredentials 00:01:33.220 > git --version # timeout=10 00:01:33.234 > git --version # 'git version 2.39.2' 00:01:33.254 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:33.256 [Pipeline] { 00:01:33.265 [Pipeline] retry 00:01:33.267 [Pipeline] { 00:01:33.281 [Pipeline] sh 00:01:33.567 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:33.581 [Pipeline] } 00:01:33.598 [Pipeline] // retry 00:01:33.603 [Pipeline] } 00:01:33.618 [Pipeline] // withCredentials 00:01:33.627 [Pipeline] httpRequest 00:01:33.936 [Pipeline] echo 00:01:33.938 Sorcerer 10.211.164.20 is alive 00:01:33.947 [Pipeline] retry 00:01:33.949 [Pipeline] { 00:01:33.965 [Pipeline] httpRequest 00:01:33.970 HttpMethod: GET 00:01:33.971 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:33.972 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:33.972 Response Code: HTTP/1.1 200 OK 00:01:33.973 Success: Status code 200 is in the accepted range: 200,404 00:01:33.973 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.231 [Pipeline] } 00:01:38.251 [Pipeline] // retry 00:01:38.259 
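The same cache flow repeats for each package above: probe the Sorcerer mirror, GET the tarball into the workspace, then unpack it with --no-same-owner. A minimal standalone sketch of that flow, assuming plain curl on the node (the pipeline drives it through Jenkins httpRequest and retry steps rather than a script like this):

set -euo pipefail
pkg=jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz   # package name taken from the log above
url="http://10.211.164.20/packages/${pkg}"
for attempt in 1 2 3; do                                  # hypothetical retry loop standing in for [Pipeline] retry
    curl -fsS -o "$pkg" "$url" && break
    sleep 5
done
tar --no-same-owner -xf "$pkg"                            # same extraction flags as the [Pipeline] sh step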
[Pipeline] sh 00:01:38.544 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:39.937 [Pipeline] sh 00:01:40.223 + git -C dpdk log --oneline -n5 00:01:40.223 eeb0605f11 version: 23.11.0 00:01:40.223 238778122a doc: update release notes for 23.11 00:01:40.223 46aa6b3cfc doc: fix description of RSS features 00:01:40.223 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:40.223 7e421ae345 devtools: support skipping forbid rule check 00:01:40.243 [Pipeline] writeFile 00:01:40.257 [Pipeline] sh 00:01:40.547 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:40.561 [Pipeline] sh 00:01:40.847 + cat autorun-spdk.conf 00:01:40.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.847 SPDK_TEST_NVME=1 00:01:40.847 SPDK_TEST_FTL=1 00:01:40.847 SPDK_TEST_ISAL=1 00:01:40.847 SPDK_RUN_ASAN=1 00:01:40.847 SPDK_RUN_UBSAN=1 00:01:40.847 SPDK_TEST_XNVME=1 00:01:40.847 SPDK_TEST_NVME_FDP=1 00:01:40.847 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:40.847 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.847 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.856 RUN_NIGHTLY=1 00:01:40.859 [Pipeline] } 00:01:40.875 [Pipeline] // stage 00:01:40.892 [Pipeline] stage 00:01:40.894 [Pipeline] { (Run VM) 00:01:40.909 [Pipeline] sh 00:01:41.195 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.195 + echo 'Start stage prepare_nvme.sh' 00:01:41.195 Start stage prepare_nvme.sh 00:01:41.195 + [[ -n 6 ]] 00:01:41.195 + disk_prefix=ex6 00:01:41.195 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:41.195 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:41.195 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:41.195 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.195 ++ SPDK_TEST_NVME=1 00:01:41.195 ++ SPDK_TEST_FTL=1 00:01:41.195 ++ SPDK_TEST_ISAL=1 00:01:41.195 ++ SPDK_RUN_ASAN=1 00:01:41.195 ++ SPDK_RUN_UBSAN=1 00:01:41.195 ++ SPDK_TEST_XNVME=1 00:01:41.195 ++ SPDK_TEST_NVME_FDP=1 00:01:41.195 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.195 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.195 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.195 ++ RUN_NIGHTLY=1 00:01:41.195 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:41.195 + nvme_files=() 00:01:41.195 + declare -A nvme_files 00:01:41.195 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.195 + nvme_files['nvme.img']=5G 00:01:41.195 + nvme_files['nvme-cmb.img']=5G 00:01:41.195 + nvme_files['nvme-multi0.img']=4G 00:01:41.195 + nvme_files['nvme-multi1.img']=4G 00:01:41.195 + nvme_files['nvme-multi2.img']=4G 00:01:41.195 + nvme_files['nvme-openstack.img']=8G 00:01:41.195 + nvme_files['nvme-zns.img']=5G 00:01:41.195 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.195 + (( SPDK_TEST_FTL == 1 )) 00:01:41.195 + nvme_files["nvme-ftl.img"]=6G 00:01:41.195 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.195 + nvme_files["nvme-fdp.img"]=1G 00:01:41.195 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:41.195 + for nvme in "${!nvme_files[@]}" 00:01:41.195 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:41.195 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.195 + for nvme in "${!nvme_files[@]}" 00:01:41.195 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:41.195 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:41.195 + for nvme in "${!nvme_files[@]}" 00:01:41.195 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:41.456 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.456 + for nvme in "${!nvme_files[@]}" 00:01:41.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:41.456 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:41.456 + for nvme in "${!nvme_files[@]}" 00:01:41.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:41.456 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.456 + for nvme in "${!nvme_files[@]}" 00:01:41.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:41.456 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.456 + for nvme in "${!nvme_files[@]}" 00:01:41.456 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:41.456 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.457 + for nvme in "${!nvme_files[@]}" 00:01:41.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:41.716 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:41.716 + for nvme in "${!nvme_files[@]}" 00:01:41.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:41.716 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.716 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:41.716 + echo 'End stage prepare_nvme.sh' 00:01:41.716 End stage prepare_nvme.sh 00:01:41.727 [Pipeline] sh 00:01:42.005 + DISTRO=fedora39 00:01:42.005 + CPUS=10 00:01:42.005 + RAM=12288 00:01:42.005 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.005 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:42.005 00:01:42.005 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:42.005 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:42.005 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:42.005 HELP=0 00:01:42.005 DRY_RUN=0 00:01:42.005 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:42.005 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:42.005 NVME_AUTO_CREATE=0 00:01:42.005 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:42.005 NVME_CMB=,,,, 00:01:42.005 NVME_PMR=,,,, 00:01:42.005 NVME_ZNS=,,,, 00:01:42.005 NVME_MS=true,,,, 00:01:42.005 NVME_FDP=,,,on, 00:01:42.005 SPDK_VAGRANT_DISTRO=fedora39 00:01:42.005 SPDK_VAGRANT_VMCPU=10 00:01:42.005 SPDK_VAGRANT_VMRAM=12288 00:01:42.005 SPDK_VAGRANT_PROVIDER=libvirt 00:01:42.005 SPDK_VAGRANT_HTTP_PROXY= 00:01:42.005 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:42.005 SPDK_OPENSTACK_NETWORK=0 00:01:42.005 VAGRANT_PACKAGE_BOX=0 00:01:42.005 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:42.005 FORCE_DISTRO=true 00:01:42.006 VAGRANT_BOX_VERSION= 00:01:42.006 EXTRA_VAGRANTFILES= 00:01:42.006 NIC_MODEL=e1000 00:01:42.006 00:01:42.006 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:42.006 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:44.629 Bringing machine 'default' up with 'libvirt' provider... 00:01:44.889 ==> default: Creating image (snapshot of base box volume). 00:01:44.889 ==> default: Creating domain with the following settings... 
00:01:44.889 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731941288_112c824658c6ee270bc2 00:01:44.889 ==> default: -- Domain type: kvm 00:01:44.889 ==> default: -- Cpus: 10 00:01:44.889 ==> default: -- Feature: acpi 00:01:44.889 ==> default: -- Feature: apic 00:01:44.889 ==> default: -- Feature: pae 00:01:44.889 ==> default: -- Memory: 12288M 00:01:44.889 ==> default: -- Memory Backing: hugepages: 00:01:44.889 ==> default: -- Management MAC: 00:01:44.889 ==> default: -- Loader: 00:01:44.889 ==> default: -- Nvram: 00:01:44.889 ==> default: -- Base box: spdk/fedora39 00:01:44.889 ==> default: -- Storage pool: default 00:01:44.889 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731941288_112c824658c6ee270bc2.img (20G) 00:01:44.889 ==> default: -- Volume Cache: default 00:01:44.889 ==> default: -- Kernel: 00:01:44.889 ==> default: -- Initrd: 00:01:45.150 ==> default: -- Graphics Type: vnc 00:01:45.150 ==> default: -- Graphics Port: -1 00:01:45.150 ==> default: -- Graphics IP: 127.0.0.1 00:01:45.150 ==> default: -- Graphics Password: Not defined 00:01:45.150 ==> default: -- Video Type: cirrus 00:01:45.150 ==> default: -- Video VRAM: 9216 00:01:45.150 ==> default: -- Sound Type: 00:01:45.150 ==> default: -- Keymap: en-us 00:01:45.150 ==> default: -- TPM Path: 00:01:45.150 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:45.150 ==> default: -- Command line args: 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:45.150 ==> default: -> value=-drive, 00:01:45.150 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:45.150 ==> default: -> value=-device, 00:01:45.150 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.150 ==> default: Creating shared folders metadata... 00:01:45.150 ==> default: Starting domain. 00:01:47.061 ==> default: Waiting for domain to get an IP address... 00:02:05.180 ==> default: Waiting for SSH to become available... 00:02:05.180 ==> default: Configuring and enabling network interfaces... 00:02:09.389 default: SSH address: 192.168.121.151:22 00:02:09.389 default: SSH username: vagrant 00:02:09.389 default: SSH auth method: private key 00:02:11.333 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.911 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:24.472 ==> default: Mounting SSHFS shared folder... 00:02:25.416 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.416 ==> default: Checking Mount.. 00:02:26.819 ==> default: Folder Successfully Mounted! 00:02:26.819 00:02:26.819 SUCCESS! 00:02:26.819 00:02:26.819 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:26.819 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:26.819 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:26.819 00:02:26.827 [Pipeline] } 00:02:26.841 [Pipeline] // stage 00:02:26.848 [Pipeline] dir 00:02:26.849 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:26.850 [Pipeline] { 00:02:26.862 [Pipeline] catchError 00:02:26.864 [Pipeline] { 00:02:26.875 [Pipeline] sh 00:02:27.152 + vagrant ssh-config --host vagrant 00:02:27.152 + sed -ne '/^Host/,$p' 00:02:27.152 + tee ssh_conf 00:02:29.676 Host vagrant 00:02:29.676 HostName 192.168.121.151 00:02:29.676 User vagrant 00:02:29.676 Port 22 00:02:29.676 UserKnownHostsFile /dev/null 00:02:29.676 StrictHostKeyChecking no 00:02:29.676 PasswordAuthentication no 00:02:29.676 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:29.676 IdentitiesOnly yes 00:02:29.676 LogLevel FATAL 00:02:29.676 ForwardAgent yes 00:02:29.676 ForwardX11 yes 00:02:29.676 00:02:29.688 [Pipeline] withEnv 00:02:29.691 [Pipeline] { 00:02:29.705 [Pipeline] sh 00:02:29.981 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:29.981 source /etc/os-release 00:02:29.981 [[ -e /image.version ]] && img=$(< /image.version) 00:02:29.981 # Minimal, systemd-like check. 
00:02:29.981 if [[ -e /.dockerenv ]]; then 00:02:29.981 # Clear garbage from the node'\''s name: 00:02:29.981 # agt-er_autotest_547-896 -> autotest_547-896 00:02:29.981 # $HOSTNAME is the actual container id 00:02:29.981 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:29.981 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:29.981 # We can assume this is a mount from a host where container is running, 00:02:29.981 # so fetch its hostname to easily identify the target swarm worker. 00:02:29.981 container="$(< /etc/hostname) ($agent)" 00:02:29.981 else 00:02:29.981 # Fallback 00:02:29.981 container=$agent 00:02:29.981 fi 00:02:29.981 fi 00:02:29.981 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:29.981 ' 00:02:30.247 [Pipeline] } 00:02:30.263 [Pipeline] // withEnv 00:02:30.271 [Pipeline] setCustomBuildProperty 00:02:30.286 [Pipeline] stage 00:02:30.289 [Pipeline] { (Tests) 00:02:30.306 [Pipeline] sh 00:02:30.584 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:30.855 [Pipeline] sh 00:02:31.180 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:31.193 [Pipeline] timeout 00:02:31.193 Timeout set to expire in 50 min 00:02:31.195 [Pipeline] { 00:02:31.208 [Pipeline] sh 00:02:31.486 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:32.053 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:32.065 [Pipeline] sh 00:02:32.343 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:32.357 [Pipeline] sh 00:02:32.635 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:32.653 [Pipeline] sh 00:02:32.932 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:32.932 ++ readlink -f spdk_repo 00:02:32.932 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:32.932 + [[ -n /home/vagrant/spdk_repo ]] 00:02:32.932 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:32.932 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:32.932 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:32.932 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:32.932 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:32.932 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:32.932 + cd /home/vagrant/spdk_repo 00:02:32.932 + source /etc/os-release 00:02:32.932 ++ NAME='Fedora Linux' 00:02:32.932 ++ VERSION='39 (Cloud Edition)' 00:02:32.932 ++ ID=fedora 00:02:32.932 ++ VERSION_ID=39 00:02:32.932 ++ VERSION_CODENAME= 00:02:32.932 ++ PLATFORM_ID=platform:f39 00:02:32.932 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:32.932 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:32.932 ++ LOGO=fedora-logo-icon 00:02:32.932 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:32.932 ++ HOME_URL=https://fedoraproject.org/ 00:02:32.932 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:32.932 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:32.932 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:32.932 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:32.932 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:32.932 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:32.932 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:32.932 ++ SUPPORT_END=2024-11-12 00:02:32.932 ++ VARIANT='Cloud Edition' 00:02:32.932 ++ VARIANT_ID=cloud 00:02:32.932 + uname -a 00:02:32.932 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:32.932 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:33.191 Hugepages 00:02:33.191 node hugesize free / total 00:02:33.191 node0 1048576kB 0 / 0 00:02:33.191 node0 2048kB 0 / 0 00:02:33.191 00:02:33.191 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:33.191 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:33.191 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:33.191 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:33.191 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:33.191 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:33.191 + rm -f /tmp/spdk-ld-path 00:02:33.191 + source autorun-spdk.conf 00:02:33.191 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.191 ++ SPDK_TEST_NVME=1 00:02:33.191 ++ SPDK_TEST_FTL=1 00:02:33.191 ++ SPDK_TEST_ISAL=1 00:02:33.191 ++ SPDK_RUN_ASAN=1 00:02:33.191 ++ SPDK_RUN_UBSAN=1 00:02:33.191 ++ SPDK_TEST_XNVME=1 00:02:33.191 ++ SPDK_TEST_NVME_FDP=1 00:02:33.191 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:33.191 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:33.191 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.191 ++ RUN_NIGHTLY=1 00:02:33.191 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:33.191 + [[ -n '' ]] 00:02:33.191 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:33.191 + for M in /var/spdk/build-*-manifest.txt 00:02:33.191 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:33.191 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:33.191 + for M in /var/spdk/build-*-manifest.txt 00:02:33.191 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:33.191 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:33.191 + for M in /var/spdk/build-*-manifest.txt 00:02:33.191 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:33.191 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:33.191 ++ uname 00:02:33.191 + [[ Linux == \L\i\n\u\x ]] 00:02:33.191 + sudo dmesg -T 00:02:33.451 + sudo dmesg --clear 00:02:33.451 + dmesg_pid=5732 00:02:33.451 + 
sudo dmesg -Tw 00:02:33.451 + [[ Fedora Linux == FreeBSD ]] 00:02:33.451 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:33.451 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:33.451 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:33.451 + [[ -x /usr/src/fio-static/fio ]] 00:02:33.451 + export FIO_BIN=/usr/src/fio-static/fio 00:02:33.451 + FIO_BIN=/usr/src/fio-static/fio 00:02:33.451 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:33.451 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:33.451 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:33.451 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:33.451 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:33.451 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:33.451 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:33.451 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:33.451 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:33.451 Test configuration: 00:02:33.451 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.451 SPDK_TEST_NVME=1 00:02:33.451 SPDK_TEST_FTL=1 00:02:33.451 SPDK_TEST_ISAL=1 00:02:33.451 SPDK_RUN_ASAN=1 00:02:33.451 SPDK_RUN_UBSAN=1 00:02:33.451 SPDK_TEST_XNVME=1 00:02:33.451 SPDK_TEST_NVME_FDP=1 00:02:33.451 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:33.451 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:33.451 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.451 RUN_NIGHTLY=1 14:48:56 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:33.451 14:48:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:33.451 14:48:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:33.451 14:48:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.451 14:48:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.451 14:48:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.451 14:48:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.451 14:48:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.451 14:48:56 -- paths/export.sh@5 -- $ export PATH 00:02:33.451 14:48:56 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.451 14:48:56 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:33.451 14:48:56 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:33.451 14:48:56 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731941336.XXXXXX 00:02:33.451 14:48:56 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731941336.3kKT0j 00:02:33.451 14:48:56 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:33.451 14:48:56 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:33.451 14:48:56 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:33.451 14:48:56 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:33.451 14:48:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:33.451 14:48:56 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:33.451 14:48:56 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:33.451 14:48:56 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 14:48:56 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:33.451 14:48:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:33.451 14:48:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:33.451 14:48:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:33.451 14:48:56 -- spdk/autobuild.sh@16 -- $ date -u 00:02:33.451 Mon Nov 18 02:48:56 PM UTC 2024 00:02:33.451 14:48:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:33.451 LTS-67-gc13c99a5e 00:02:33.451 14:48:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:33.451 14:48:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:33.451 14:48:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:33.451 14:48:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 ************************************ 00:02:33.451 START TEST asan 00:02:33.451 ************************************ 00:02:33.451 using asan 00:02:33.451 14:48:56 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:33.451 00:02:33.451 real 0m0.000s 00:02:33.451 user 0m0.000s 00:02:33.451 sys 0m0.000s 00:02:33.451 14:48:56 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:33.451 ************************************ 00:02:33.451 END TEST asan 00:02:33.451 ************************************ 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 14:48:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:33.451 14:48:56 -- 
spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:33.451 14:48:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:33.451 14:48:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 ************************************ 00:02:33.451 START TEST ubsan 00:02:33.451 ************************************ 00:02:33.451 using ubsan 00:02:33.451 ************************************ 00:02:33.451 END TEST ubsan 00:02:33.451 ************************************ 00:02:33.451 14:48:56 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:33.451 00:02:33.451 real 0m0.000s 00:02:33.451 user 0m0.000s 00:02:33.451 sys 0m0.000s 00:02:33.451 14:48:56 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 14:48:56 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:33.451 14:48:56 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:33.451 14:48:56 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:33.451 14:48:56 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:33.451 14:48:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.451 14:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.451 ************************************ 00:02:33.451 START TEST build_native_dpdk 00:02:33.451 ************************************ 00:02:33.451 14:48:56 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:33.451 14:48:56 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:33.451 14:48:56 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:33.451 14:48:56 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:33.451 14:48:56 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:33.451 14:48:56 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:33.451 14:48:56 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:33.451 14:48:56 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:33.452 14:48:56 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:33.452 14:48:56 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:33.452 14:48:56 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:33.452 14:48:56 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:33.452 14:48:56 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:33.452 14:48:56 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:33.452 14:48:56 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:33.452 14:48:56 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:33.452 14:48:56 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:33.452 14:48:56 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:33.452 eeb0605f11 version: 23.11.0 00:02:33.452 238778122a doc: update release notes for 23.11 00:02:33.452 46aa6b3cfc doc: fix description of RSS features 00:02:33.452 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:33.452 7e421ae345 devtools: support skipping forbid rule check 00:02:33.452 14:48:56 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:33.452 14:48:56 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:33.452 14:48:56 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:33.452 14:48:56 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:33.452 14:48:56 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:33.452 14:48:56 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:33.452 14:48:56 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:33.452 14:48:56 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:33.452 14:48:56 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:33.452 14:48:56 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:33.452 14:48:57 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:33.452 14:48:57 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:33.452 14:48:57 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:33.452 14:48:57 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:33.452 14:48:57 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:33.452 14:48:57 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:33.452 14:48:57 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:33.452 14:48:57 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:33.452 14:48:57 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:33.452 14:48:57 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:33.452 14:48:57 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:33.452 14:48:57 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:33.452 14:48:57 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:33.452 14:48:57 -- scripts/common.sh@343 -- $ case "$op" in 00:02:33.452 14:48:57 -- scripts/common.sh@344 -- $ : 1 00:02:33.452 14:48:57 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:33.452 14:48:57 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.452 14:48:57 -- scripts/common.sh@364 -- $ decimal 23 00:02:33.452 14:48:57 -- scripts/common.sh@352 -- $ local d=23 00:02:33.452 14:48:57 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:33.452 14:48:57 -- scripts/common.sh@354 -- $ echo 23 00:02:33.452 14:48:57 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:33.452 14:48:57 -- scripts/common.sh@365 -- $ decimal 21 00:02:33.452 14:48:57 -- scripts/common.sh@352 -- $ local d=21 00:02:33.452 14:48:57 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:33.452 14:48:57 -- scripts/common.sh@354 -- $ echo 21 00:02:33.452 14:48:57 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:33.452 14:48:57 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:33.452 14:48:57 -- scripts/common.sh@366 -- $ return 1 00:02:33.452 14:48:57 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:33.452 patching file config/rte_config.h 00:02:33.452 Hunk #1 succeeded at 60 (offset 1 line). 00:02:33.452 14:48:57 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:33.452 14:48:57 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:33.452 14:48:57 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:33.452 14:48:57 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:33.452 14:48:57 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:33.452 14:48:57 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:33.452 14:48:57 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:33.452 14:48:57 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:33.452 14:48:57 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:33.452 14:48:57 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:33.452 14:48:57 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:33.452 14:48:57 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:33.452 14:48:57 -- scripts/common.sh@343 -- $ case "$op" in 00:02:33.452 14:48:57 -- scripts/common.sh@344 -- $ : 1 00:02:33.452 14:48:57 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:33.452 14:48:57 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.452 14:48:57 -- scripts/common.sh@364 -- $ decimal 23 00:02:33.452 14:48:57 -- scripts/common.sh@352 -- $ local d=23 00:02:33.452 14:48:57 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:33.452 14:48:57 -- scripts/common.sh@354 -- $ echo 23 00:02:33.452 14:48:57 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:33.452 14:48:57 -- scripts/common.sh@365 -- $ decimal 24 00:02:33.452 14:48:57 -- scripts/common.sh@352 -- $ local d=24 00:02:33.452 14:48:57 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:33.452 14:48:57 -- scripts/common.sh@354 -- $ echo 24 00:02:33.452 14:48:57 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:33.452 14:48:57 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:33.452 14:48:57 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:33.452 14:48:57 -- scripts/common.sh@367 -- $ return 0 00:02:33.452 14:48:57 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:33.452 patching file lib/pcapng/rte_pcapng.c 00:02:33.452 14:48:57 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:33.452 14:48:57 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:33.452 14:48:57 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:33.452 14:48:57 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:33.452 14:48:57 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.660 The Meson build system 00:02:37.661 Version: 1.5.0 00:02:37.661 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.661 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.661 Build type: native build 00:02:37.661 Program cat found: YES (/usr/bin/cat) 00:02:37.661 Project name: DPDK 00:02:37.661 Project version: 23.11.0 00:02:37.661 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.661 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:37.661 Host machine cpu family: x86_64 00:02:37.661 Host machine cpu: x86_64 00:02:37.661 Message: ## Building in Developer Mode ## 00:02:37.661 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.661 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.661 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.661 Program python3 found: YES (/usr/bin/python3) 00:02:37.661 Program cat found: YES (/usr/bin/cat) 00:02:37.661 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
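The two cmp_versions traces above gate the compatibility patches: 23.11.0 is not older than 21.11.0, after which the rte_config.h hunk is applied, and it is older than 24.07.0, so the rte_pcapng.c fix applies as well. A condensed standalone equivalent of that comparison (a hypothetical helper, not the actual scripts/common.sh code):

version_lt() {                                            # usage: version_lt A B -> true if A < B
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # component greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                              # all components equal: not less-than
}
version_lt 23.11.0 21.11.0 || echo "not older than 21.11.0"
version_lt 23.11.0 24.07.0 && echo "older than 24.07.0"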
00:02:37.661 Compiler for C supports arguments -march=native: YES 00:02:37.661 Checking for size of "void *" : 8 00:02:37.661 Checking for size of "void *" : 8 (cached) 00:02:37.661 Library m found: YES 00:02:37.661 Library numa found: YES 00:02:37.661 Has header "numaif.h" : YES 00:02:37.661 Library fdt found: NO 00:02:37.661 Library execinfo found: NO 00:02:37.661 Has header "execinfo.h" : YES 00:02:37.661 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.661 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.661 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.661 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.661 Run-time dependency openssl found: YES 3.1.1 00:02:37.661 Run-time dependency libpcap found: YES 1.10.4 00:02:37.661 Has header "pcap.h" with dependency libpcap: YES 00:02:37.661 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.661 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.661 Compiler for C supports arguments -Wformat: YES 00:02:37.661 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.661 Compiler for C supports arguments -Wformat-security: NO 00:02:37.661 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.661 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.661 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.661 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.661 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.661 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.661 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.661 Compiler for C supports arguments -Wundef: YES 00:02:37.661 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.661 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.661 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.661 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.661 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.661 Program objdump found: YES (/usr/bin/objdump) 00:02:37.661 Compiler for C supports arguments -mavx512f: YES 00:02:37.661 Checking if "AVX512 checking" compiles: YES 00:02:37.661 Fetching value of define "__SSE4_2__" : 1 00:02:37.661 Fetching value of define "__AES__" : 1 00:02:37.661 Fetching value of define "__AVX__" : 1 00:02:37.661 Fetching value of define "__AVX2__" : 1 00:02:37.661 Fetching value of define "__AVX512BW__" : 1 00:02:37.661 Fetching value of define "__AVX512CD__" : 1 00:02:37.661 Fetching value of define "__AVX512DQ__" : 1 00:02:37.661 Fetching value of define "__AVX512F__" : 1 00:02:37.661 Fetching value of define "__AVX512VL__" : 1 00:02:37.661 Fetching value of define "__PCLMUL__" : 1 00:02:37.661 Fetching value of define "__RDRND__" : 1 00:02:37.661 Fetching value of define "__RDSEED__" : 1 00:02:37.661 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:37.661 Fetching value of define "__znver1__" : (undefined) 00:02:37.661 Fetching value of define "__znver2__" : (undefined) 00:02:37.661 Fetching value of define "__znver3__" : (undefined) 00:02:37.661 Fetching value of define "__znver4__" : (undefined) 00:02:37.661 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.661 Message: lib/log: Defining dependency "log" 00:02:37.661 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.661 Message: lib/telemetry: Defining dependency "telemetry" 
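Each "Compiler for C supports arguments" line above is Meson probing gcc by compiling a tiny test program with the flag in question. Roughly the same check can be reproduced by hand, shown here for the AVX-512 flag that matters for the net/i40e driver later in this log (a hand-rolled approximation, not what Meson literally executes; the scratch file path is arbitrary):

echo 'int main(void) { return 0; }' > /tmp/flag_probe.c
if gcc -mavx512f -Werror -c /tmp/flag_probe.c -o /dev/null; then
    echo 'Compiler for C supports arguments -mavx512f: YES'
else
    echo 'Compiler for C supports arguments -mavx512f: NO'
fi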
00:02:37.661 Checking for function "getentropy" : NO 00:02:37.661 Message: lib/eal: Defining dependency "eal" 00:02:37.661 Message: lib/ring: Defining dependency "ring" 00:02:37.661 Message: lib/rcu: Defining dependency "rcu" 00:02:37.661 Message: lib/mempool: Defining dependency "mempool" 00:02:37.661 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.661 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.661 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:37.661 Compiler for C supports arguments -mpclmul: YES 00:02:37.661 Compiler for C supports arguments -maes: YES 00:02:37.661 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.661 Compiler for C supports arguments -mavx512bw: YES 00:02:37.661 Compiler for C supports arguments -mavx512dq: YES 00:02:37.661 Compiler for C supports arguments -mavx512vl: YES 00:02:37.661 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.661 Compiler for C supports arguments -mavx2: YES 00:02:37.661 Compiler for C supports arguments -mavx: YES 00:02:37.661 Message: lib/net: Defining dependency "net" 00:02:37.661 Message: lib/meter: Defining dependency "meter" 00:02:37.661 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.661 Message: lib/pci: Defining dependency "pci" 00:02:37.661 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.661 Message: lib/metrics: Defining dependency "metrics" 00:02:37.661 Message: lib/hash: Defining dependency "hash" 00:02:37.661 Message: lib/timer: Defining dependency "timer" 00:02:37.661 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.661 Message: lib/acl: Defining dependency "acl" 00:02:37.661 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.661 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.661 Run-time dependency libelf found: YES 0.191 00:02:37.661 Message: lib/bpf: Defining dependency "bpf" 00:02:37.661 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.661 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.661 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.661 Message: lib/distributor: Defining dependency "distributor" 00:02:37.661 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.661 Message: lib/efd: Defining dependency "efd" 00:02:37.661 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.661 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:37.661 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.661 Message: lib/gro: Defining dependency "gro" 00:02:37.661 Message: lib/gso: Defining dependency "gso" 00:02:37.661 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.661 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.661 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.661 Message: lib/lpm: Defining dependency "lpm" 00:02:37.661 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512IFMA__" : 1 00:02:37.661 Message: 
lib/member: Defining dependency "member" 00:02:37.661 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.661 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.661 Message: lib/power: Defining dependency "power" 00:02:37.661 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.661 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.661 Message: lib/mldev: Defining dependency "mldev" 00:02:37.661 Message: lib/rib: Defining dependency "rib" 00:02:37.661 Message: lib/reorder: Defining dependency "reorder" 00:02:37.661 Message: lib/sched: Defining dependency "sched" 00:02:37.661 Message: lib/security: Defining dependency "security" 00:02:37.661 Message: lib/stack: Defining dependency "stack" 00:02:37.661 Has header "linux/userfaultfd.h" : YES 00:02:37.661 Has header "linux/vduse.h" : YES 00:02:37.661 Message: lib/vhost: Defining dependency "vhost" 00:02:37.661 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.661 Message: lib/pdcp: Defining dependency "pdcp" 00:02:37.661 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.661 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.661 Message: lib/fib: Defining dependency "fib" 00:02:37.661 Message: lib/port: Defining dependency "port" 00:02:37.661 Message: lib/pdump: Defining dependency "pdump" 00:02:37.661 Message: lib/table: Defining dependency "table" 00:02:37.661 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.661 Message: lib/graph: Defining dependency "graph" 00:02:37.661 Message: lib/node: Defining dependency "node" 00:02:37.661 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.661 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.661 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.661 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.036 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.036 Compiler for C supports arguments -Wno-unused-value: YES 00:02:39.036 Compiler for C supports arguments -Wno-format: YES 00:02:39.036 Compiler for C supports arguments -Wno-format-security: YES 00:02:39.036 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.036 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.036 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.036 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.036 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.036 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.036 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.036 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.036 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.037 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.037 Has header "sys/epoll.h" : YES 00:02:39.037 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:39.037 Configuring doxy-api-html.conf using configuration 00:02:39.037 Configuring doxy-api-man.conf using configuration 00:02:39.037 Program mandb found: YES (/usr/bin/mandb) 00:02:39.037 Program sphinx-build found: NO 00:02:39.037 Configuring rte_build_config.h using configuration 00:02:39.037 Message: 00:02:39.037 ================= 00:02:39.037 Applications Enabled 00:02:39.037 ================= 00:02:39.037 00:02:39.037 apps: 00:02:39.037 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf,
00:02:39.037 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:39.037 test-pmd, test-regex, test-sad, test-security-perf,
00:02:39.037
00:02:39.037 Message:
00:02:39.037 =================
00:02:39.037 Libraries Enabled
00:02:39.037 =================
00:02:39.037
00:02:39.037 libs:
00:02:39.037 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:39.037 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:39.037 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:39.037 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:39.037 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:39.037 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:39.037 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:39.037
00:02:39.037
00:02:39.037 Message:
00:02:39.037 ===============
00:02:39.037 Drivers Enabled
00:02:39.037 ===============
00:02:39.037
00:02:39.037 common:
00:02:39.037
00:02:39.037 bus:
00:02:39.037 pci, vdev,
00:02:39.037 mempool:
00:02:39.037 ring,
00:02:39.037 dma:
00:02:39.037
00:02:39.037 net:
00:02:39.037 i40e,
00:02:39.037 raw:
00:02:39.037
00:02:39.037 crypto:
00:02:39.037
00:02:39.037 compress:
00:02:39.037
00:02:39.037 regex:
00:02:39.037
00:02:39.037 ml:
00:02:39.037
00:02:39.037 vdpa:
00:02:39.037
00:02:39.037 event:
00:02:39.037
00:02:39.037 baseband:
00:02:39.037
00:02:39.037 gpu:
00:02:39.037
00:02:39.037
00:02:39.037 Message:
00:02:39.037 =================
00:02:39.037 Content Skipped
00:02:39.037 =================
00:02:39.037
00:02:39.037 apps:
00:02:39.037
00:02:39.037 libs:
00:02:39.037
00:02:39.037 drivers:
00:02:39.037 common/cpt: not in enabled drivers build config
00:02:39.037 common/dpaax: not in enabled drivers build config
00:02:39.037 common/iavf: not in enabled drivers build config
00:02:39.037 common/idpf: not in enabled drivers build config
00:02:39.037 common/mvep: not in enabled drivers build config
00:02:39.037 common/octeontx: not in enabled drivers build config
00:02:39.037 bus/auxiliary: not in enabled drivers build config
00:02:39.037 bus/cdx: not in enabled drivers build config
00:02:39.037 bus/dpaa: not in enabled drivers build config
00:02:39.037 bus/fslmc: not in enabled drivers build config
00:02:39.037 bus/ifpga: not in enabled drivers build config
00:02:39.037 bus/platform: not in enabled drivers build config
00:02:39.037 bus/vmbus: not in enabled drivers build config
00:02:39.037 common/cnxk: not in enabled drivers build config
00:02:39.037 common/mlx5: not in enabled drivers build config
00:02:39.037 common/nfp: not in enabled drivers build config
00:02:39.037 common/qat: not in enabled drivers build config
00:02:39.037 common/sfc_efx: not in enabled drivers build config
00:02:39.037 mempool/bucket: not in enabled drivers build config
00:02:39.037 mempool/cnxk: not in enabled drivers build config
00:02:39.037 mempool/dpaa: not in enabled drivers build config
00:02:39.037 mempool/dpaa2: not in enabled drivers build config
00:02:39.037 mempool/octeontx: not in enabled drivers build config
00:02:39.037 mempool/stack: not in enabled drivers build config
00:02:39.037 dma/cnxk: not in enabled drivers build config
00:02:39.037 dma/dpaa: not in enabled drivers build config
00:02:39.037 dma/dpaa2: not in enabled drivers build config
00:02:39.037 dma/hisilicon: not in enabled drivers build config
00:02:39.037 dma/idxd: not in enabled drivers build config
00:02:39.037 dma/ioat: not in enabled drivers build config
00:02:39.037 dma/skeleton: not in enabled drivers build config
00:02:39.037 net/af_packet: not in enabled drivers build config
00:02:39.037 net/af_xdp: not in enabled drivers build config
00:02:39.037 net/ark: not in enabled drivers build config
00:02:39.037 net/atlantic: not in enabled drivers build config
00:02:39.037 net/avp: not in enabled drivers build config
00:02:39.037 net/axgbe: not in enabled drivers build config
00:02:39.037 net/bnx2x: not in enabled drivers build config
00:02:39.037 net/bnxt: not in enabled drivers build config
00:02:39.037 net/bonding: not in enabled drivers build config
00:02:39.037 net/cnxk: not in enabled drivers build config
00:02:39.037 net/cpfl: not in enabled drivers build config
00:02:39.037 net/cxgbe: not in enabled drivers build config
00:02:39.037 net/dpaa: not in enabled drivers build config
00:02:39.037 net/dpaa2: not in enabled drivers build config
00:02:39.037 net/e1000: not in enabled drivers build config
00:02:39.037 net/ena: not in enabled drivers build config
00:02:39.037 net/enetc: not in enabled drivers build config
00:02:39.037 net/enetfec: not in enabled drivers build config
00:02:39.037 net/enic: not in enabled drivers build config
00:02:39.037 net/failsafe: not in enabled drivers build config
00:02:39.037 net/fm10k: not in enabled drivers build config
00:02:39.037 net/gve: not in enabled drivers build config
00:02:39.037 net/hinic: not in enabled drivers build config
00:02:39.037 net/hns3: not in enabled drivers build config
00:02:39.037 net/iavf: not in enabled drivers build config
00:02:39.037 net/ice: not in enabled drivers build config
00:02:39.037 net/idpf: not in enabled drivers build config
00:02:39.037 net/igc: not in enabled drivers build config
00:02:39.037 net/ionic: not in enabled drivers build config
00:02:39.037 net/ipn3ke: not in enabled drivers build config
00:02:39.037 net/ixgbe: not in enabled drivers build config
00:02:39.037 net/mana: not in enabled drivers build config
00:02:39.037 net/memif: not in enabled drivers build config
00:02:39.037 net/mlx4: not in enabled drivers build config
00:02:39.037 net/mlx5: not in enabled drivers build config
00:02:39.037 net/mvneta: not in enabled drivers build config
00:02:39.037 net/mvpp2: not in enabled drivers build config
00:02:39.037 net/netvsc: not in enabled drivers build config
00:02:39.037 net/nfb: not in enabled drivers build config
00:02:39.037 net/nfp: not in enabled drivers build config
00:02:39.037 net/ngbe: not in enabled drivers build config
00:02:39.037 net/null: not in enabled drivers build config
00:02:39.037 net/octeontx: not in enabled drivers build config
00:02:39.037 net/octeon_ep: not in enabled drivers build config
00:02:39.037 net/pcap: not in enabled drivers build config
00:02:39.037 net/pfe: not in enabled drivers build config
00:02:39.037 net/qede: not in enabled drivers build config
00:02:39.037 net/ring: not in enabled drivers build config
00:02:39.037 net/sfc: not in enabled drivers build config
00:02:39.037 net/softnic: not in enabled drivers build config
00:02:39.037 net/tap: not in enabled drivers build config
00:02:39.037 net/thunderx: not in enabled drivers build config
00:02:39.037 net/txgbe: not in enabled drivers build config
00:02:39.037 net/vdev_netvsc: not in enabled drivers build config
00:02:39.037 net/vhost: not in enabled drivers build config
00:02:39.037 net/virtio: not in enabled drivers build config
00:02:39.037 net/vmxnet3: not in enabled drivers build config
00:02:39.037 raw/cnxk_bphy: not in enabled drivers build config
00:02:39.037 raw/cnxk_gpio: not in enabled drivers build config
00:02:39.037 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:39.037 raw/ifpga: not in enabled drivers build config
00:02:39.037 raw/ntb: not in enabled drivers build config
00:02:39.037 raw/skeleton: not in enabled drivers build config
00:02:39.037 crypto/armv8: not in enabled drivers build config
00:02:39.037 crypto/bcmfs: not in enabled drivers build config
00:02:39.037 crypto/caam_jr: not in enabled drivers build config
00:02:39.037 crypto/ccp: not in enabled drivers build config
00:02:39.037 crypto/cnxk: not in enabled drivers build config
00:02:39.037 crypto/dpaa_sec: not in enabled drivers build config
00:02:39.037 crypto/dpaa2_sec: not in enabled drivers build config
00:02:39.037 crypto/ipsec_mb: not in enabled drivers build config
00:02:39.037 crypto/mlx5: not in enabled drivers build config
00:02:39.037 crypto/mvsam: not in enabled drivers build config
00:02:39.037 crypto/nitrox: not in enabled drivers build config
00:02:39.037 crypto/null: not in enabled drivers build config
00:02:39.037 crypto/octeontx: not in enabled drivers build config
00:02:39.037 crypto/openssl: not in enabled drivers build config
00:02:39.037 crypto/scheduler: not in enabled drivers build config
00:02:39.037 crypto/uadk: not in enabled drivers build config
00:02:39.037 crypto/virtio: not in enabled drivers build config
00:02:39.037 compress/isal: not in enabled drivers build config
00:02:39.037 compress/mlx5: not in enabled drivers build config
00:02:39.037 compress/octeontx: not in enabled drivers build config
00:02:39.037 compress/zlib: not in enabled drivers build config
00:02:39.037 regex/mlx5: not in enabled drivers build config
00:02:39.037 regex/cn9k: not in enabled drivers build config
00:02:39.037 ml/cnxk: not in enabled drivers build config
00:02:39.037 vdpa/ifc: not in enabled drivers build config
00:02:39.037 vdpa/mlx5: not in enabled drivers build config
00:02:39.037 vdpa/nfp: not in enabled drivers build config
00:02:39.037 vdpa/sfc: not in enabled drivers build config
00:02:39.037 event/cnxk: not in enabled drivers build config
00:02:39.037 event/dlb2: not in enabled drivers build config
00:02:39.037 event/dpaa: not in enabled drivers build config
00:02:39.037 event/dpaa2: not in enabled drivers build config
00:02:39.038 event/dsw: not in enabled drivers build config
00:02:39.038 event/opdl: not in enabled drivers build config
00:02:39.038 event/skeleton: not in enabled drivers build config
00:02:39.038 event/sw: not in enabled drivers build config
00:02:39.038 event/octeontx: not in enabled drivers build config
00:02:39.038 baseband/acc: not in enabled drivers build config
00:02:39.038 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:39.038 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:39.038 baseband/la12xx: not in enabled drivers build config
00:02:39.038 baseband/null: not in enabled drivers build config
00:02:39.038 baseband/turbo_sw: not in enabled drivers build config
00:02:39.038 gpu/cuda: not in enabled drivers build config
00:02:39.038
00:02:39.038
00:02:39.038 Build targets in project: 215
00:02:39.038
00:02:39.038 DPDK 23.11.0
00:02:39.038
00:02:39.038 User defined options
00:02:39.038 libdir : lib
00:02:39.038 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:39.038 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:39.038 c_link_args :
00:02:39.038 enable_docs : false
00:02:39.038 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:39.038 enable_kmods : false
00:02:39.038 machine : native
00:02:39.038 tests : false
00:02:39.038
00:02:39.038 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.038 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:39.296 14:49:02 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:39.296 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:39.296 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.296 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.296 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.296 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.296 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.296 [6/705] Linking static target lib/librte_kvargs.a 00:02:39.296 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.296 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.296 [9/705] Linking static target lib/librte_log.a 00:02:39.554 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.554 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.554 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.554 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.554 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.554 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.812 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.812 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.812 [18/705] Linking target lib/librte_log.so.24.0 00:02:39.812 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.812 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.812 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.070 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.070 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.070 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.070 [25/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:40.070 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.070 [27/705] Linking target lib/librte_kvargs.so.24.0 00:02:40.070 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.070 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.070 [30/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.070 [31/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:40.328 [32/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.328 [33/705] Linking static target lib/librte_telemetry.a 00:02:40.328 [34/705] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.328 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.328 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.328 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.328 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.328 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.328 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.328 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.586 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.586 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.586 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.586 [45/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.586 [46/705] Linking target lib/librte_telemetry.so.24.0 00:02:40.586 [47/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:40.844 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.844 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.844 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.844 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.844 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.844 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.844 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.844 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.844 [56/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.844 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.102 [58/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.102 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.102 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.102 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.102 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.102 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.102 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.102 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:41.102 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.102 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.102 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:41.359 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.359 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:41.359 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.359 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.359 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
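Aside: nearly every object from [2/705] up to this point belongs to librte_eal, the environment abstraction layer that any DPDK application initializes before touching the other libraries in this build. For orientation only (this is not part of the CI run), a minimal consumer of the libraries being compiled here looks roughly like the following sketch; the port-count print is illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        /* Parses EAL arguments and probes the buses enabled in this
         * build config (bus/pci and bus/vdev). */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return EXIT_FAILURE;
        }
        /* net/i40e is the only net PMD enabled above, so only
         * i40e-backed ports (if any) are counted here. */
        printf("%u ethdev ports available\n", rte_eth_dev_count_avail());
        rte_eal_cleanup();
        return EXIT_SUCCESS;
    }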
00:02:41.359 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.360 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.360 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:41.360 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.360 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.617 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.617 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.617 [81/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.617 [82/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.617 [83/705] Linking static target lib/librte_ring.a 00:02:41.875 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.875 [85/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.875 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.875 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.875 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.875 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:42.134 [90/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:42.134 [91/705] Linking static target lib/librte_eal.a 00:02:42.134 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.134 [93/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.134 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:42.134 [95/705] Linking static target lib/librte_mempool.a 00:02:42.134 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.134 [97/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.134 [98/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.134 [99/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.394 [100/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.394 [101/705] Linking static target lib/librte_rcu.a 00:02:42.394 [102/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.394 [103/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.394 [104/705] Linking static target lib/librte_meter.a 00:02:42.394 [105/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.394 [106/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.394 [107/705] Linking static target lib/librte_mbuf.a 00:02:42.394 [108/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.654 [109/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:42.654 [110/705] Linking static target lib/librte_net.a 00:02:42.654 [111/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.654 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.654 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.654 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.654 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.654 [116/705] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.912 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.912 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.170 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.430 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.430 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.430 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.430 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.430 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.430 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.430 [126/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.430 [127/705] Linking static target lib/librte_pci.a 00:02:43.430 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.688 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.688 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.689 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.689 [132/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.689 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.689 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.689 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.689 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.689 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.689 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.689 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.948 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.948 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.948 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.948 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.948 [144/705] Linking static target lib/librte_cmdline.a 00:02:43.948 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.948 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:43.948 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:43.948 [148/705] Linking static target lib/librte_metrics.a 00:02:44.207 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.207 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.465 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.465 [152/705] Linking static target lib/librte_timer.a 00:02:44.465 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.465 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.465 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 
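Aside: steps [149/705] and [153/705] above are librte_hash's fixed-bucket and Toeplitz hash objects; the cuckoo-hash core that gets linked into librte_hash follows shortly at [171/705]. As a hedged sketch of the rte_hash API these objects implement (table name and sizing below are made-up example values, not taken from this build):

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>
    #include <rte_lcore.h>

    /* Create a small hash table; every parameter is illustrative. */
    static struct rte_hash *example_table(void)
    {
        struct rte_hash_parameters params = {
            .name = "example_flows",          /* hypothetical name */
            .entries = 1024,
            .key_len = sizeof(uint32_t),
            .hash_func = rte_jhash,
            .hash_func_init_val = 0,
            .socket_id = (int)rte_socket_id(),
        };
        return rte_hash_create(&params);      /* NULL on failure */
    }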
00:02:44.722 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.722 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.722 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:44.722 [159/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:44.980 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.980 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.980 [162/705] Linking static target lib/librte_bitratestats.a 00:02:45.238 [163/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.238 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.238 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.238 [166/705] Linking static target lib/librte_bbdev.a 00:02:45.238 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:45.496 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:45.496 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:45.753 [170/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.753 [171/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.753 [172/705] Linking static target lib/librte_hash.a 00:02:45.753 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.753 [174/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:45.753 [175/705] Linking static target lib/acl/libavx2_tmp.a 00:02:45.753 [176/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.753 [177/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.753 [178/705] Linking static target lib/librte_ethdev.a 00:02:46.011 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:46.011 [180/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:46.011 [181/705] Linking static target lib/librte_cfgfile.a 00:02:46.011 [182/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:46.011 [183/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.011 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:46.268 [185/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.268 [186/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.268 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:46.268 [188/705] Linking target lib/librte_eal.so.24.0 00:02:46.268 [189/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.268 [190/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.268 [191/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:46.525 [192/705] Linking target lib/librte_ring.so.24.0 00:02:46.525 [193/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:46.525 [194/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.525 [195/705] Linking target lib/librte_meter.so.24.0 00:02:46.525 [196/705] Linking target lib/librte_pci.so.24.0 00:02:46.525 [197/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:46.525 [198/705] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.525 [199/705] Linking target lib/librte_rcu.so.24.0 00:02:46.525 [200/705] Linking target lib/librte_mempool.so.24.0 00:02:46.525 [201/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:46.525 [202/705] Linking target lib/librte_timer.so.24.0 00:02:46.525 [203/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:46.525 [204/705] Linking static target lib/librte_bpf.a 00:02:46.525 [205/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:46.525 [206/705] Linking static target lib/librte_acl.a 00:02:46.525 [207/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:46.525 [208/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:46.525 [209/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:46.525 [210/705] Linking target lib/librte_cfgfile.so.24.0 00:02:46.525 [211/705] Linking static target lib/librte_compressdev.a 00:02:46.525 [212/705] Linking target lib/librte_mbuf.so.24.0 00:02:46.828 [213/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.828 [214/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.828 [215/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:46.828 [216/705] Linking target lib/librte_net.so.24.0 00:02:46.828 [217/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.828 [218/705] Linking target lib/librte_bbdev.so.24.0 00:02:46.828 [219/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.828 [220/705] Linking target lib/librte_acl.so.24.0 00:02:46.828 [221/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:46.828 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.828 [223/705] Linking target lib/librte_cmdline.so.24.0 00:02:46.828 [224/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.828 [225/705] Linking target lib/librte_hash.so.24.0 00:02:46.828 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.828 [227/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:47.096 [228/705] Linking static target lib/librte_distributor.a 00:02:47.096 [229/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.096 [230/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:47.096 [231/705] Linking target lib/librte_compressdev.so.24.0 00:02:47.097 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.097 [233/705] Linking target lib/librte_distributor.so.24.0 00:02:47.097 [234/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:47.354 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.354 [236/705] Linking static target lib/librte_dmadev.a 00:02:47.354 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:47.612 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.612 [239/705] Compiling 
C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:47.612 [240/705] Linking target lib/librte_dmadev.so.24.0 00:02:47.612 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:47.612 [242/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:47.612 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.612 [244/705] Linking static target lib/librte_efd.a 00:02:47.870 [245/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.870 [246/705] Linking target lib/librte_efd.so.24.0 00:02:48.128 [247/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:48.128 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:48.128 [249/705] Linking static target lib/librte_dispatcher.a 00:02:48.128 [250/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.128 [251/705] Linking static target lib/librte_cryptodev.a 00:02:48.128 [252/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:48.128 [253/705] Linking static target lib/librte_gpudev.a 00:02:48.128 [254/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.386 [255/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:48.386 [256/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:48.386 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.386 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:48.644 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:48.644 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:48.644 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:48.902 [262/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.902 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:48.902 [264/705] Linking target lib/librte_gpudev.so.24.0 00:02:48.902 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:48.902 [266/705] Linking static target lib/librte_gro.a 00:02:48.902 [267/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:48.902 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:48.902 [269/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:48.902 [270/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:48.902 [271/705] Linking static target lib/librte_eventdev.a 00:02:48.902 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:48.902 [273/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.902 [274/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.159 [275/705] Linking target lib/librte_cryptodev.so.24.0 00:02:49.159 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:49.159 [277/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:49.159 [278/705] Linking static target lib/librte_gso.a 00:02:49.159 [279/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:49.159 [280/705] Generating lib/gso.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:49.159 [281/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.159 [282/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:49.418 [283/705] Linking target lib/librte_ethdev.so.24.0 00:02:49.418 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:49.418 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:49.418 [286/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:49.418 [287/705] Linking target lib/librte_metrics.so.24.0 00:02:49.418 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:49.418 [289/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:49.418 [290/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:49.418 [291/705] Linking target lib/librte_bpf.so.24.0 00:02:49.418 [292/705] Linking target lib/librte_gro.so.24.0 00:02:49.418 [293/705] Linking static target lib/librte_jobstats.a 00:02:49.418 [294/705] Linking target lib/librte_gso.so.24.0 00:02:49.418 [295/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:49.418 [296/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:49.418 [297/705] Linking static target lib/librte_ip_frag.a 00:02:49.418 [298/705] Linking target lib/librte_bitratestats.so.24.0 00:02:49.418 [299/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:49.676 [300/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:49.676 [301/705] Linking static target lib/librte_latencystats.a 00:02:49.676 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.676 [303/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.676 [304/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:49.676 [305/705] Linking target lib/librte_jobstats.so.24.0 00:02:49.676 [306/705] Linking target lib/librte_ip_frag.so.24.0 00:02:49.934 [307/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:49.934 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.934 [309/705] Linking target lib/librte_latencystats.so.24.0 00:02:49.934 [310/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:49.934 [311/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.934 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.934 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:49.934 [314/705] Linking static target lib/librte_lpm.a 00:02:50.192 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:50.192 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.192 [317/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:50.192 [318/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:50.192 [319/705] Linking static target lib/librte_pcapng.a 00:02:50.192 [320/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.192 [321/705] Compiling C object 
lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:50.192 [322/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.192 [323/705] Linking target lib/librte_lpm.so.24.0 00:02:50.450 [324/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.450 [325/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.450 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:50.450 [327/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:50.450 [328/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.450 [329/705] Linking target lib/librte_eventdev.so.24.0 00:02:50.450 [330/705] Linking target lib/librte_pcapng.so.24.0 00:02:50.450 [331/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:50.450 [332/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:50.450 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:50.450 [334/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:50.450 [335/705] Linking target lib/librte_dispatcher.so.24.0 00:02:50.709 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.709 [337/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.709 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.709 [339/705] Linking static target lib/librte_power.a 00:02:50.709 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:50.709 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:50.709 [342/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:50.709 [343/705] Linking static target lib/librte_rawdev.a 00:02:50.966 [344/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:50.966 [345/705] Linking static target lib/librte_regexdev.a 00:02:50.966 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:50.966 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:50.966 [348/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:50.966 [349/705] Linking static target lib/librte_member.a 00:02:50.966 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:50.966 [351/705] Linking static target lib/librte_mldev.a 00:02:51.224 [352/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:51.224 [353/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.224 [354/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.224 [355/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.224 [356/705] Linking target lib/librte_member.so.24.0 00:02:51.224 [357/705] Linking target lib/librte_rawdev.so.24.0 00:02:51.224 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:51.224 [359/705] Linking target lib/librte_power.so.24.0 00:02:51.224 [360/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.224 [361/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:51.224 [362/705] Linking static target lib/librte_reorder.a 00:02:51.482 
[363/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:51.482 [364/705] Linking static target lib/librte_rib.a 00:02:51.482 [365/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.482 [366/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:51.482 [367/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:51.482 [368/705] Linking target lib/librte_regexdev.so.24.0 00:02:51.482 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:51.482 [370/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.482 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:51.482 [372/705] Linking static target lib/librte_stack.a 00:02:51.482 [373/705] Linking target lib/librte_reorder.so.24.0 00:02:51.482 [374/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.482 [375/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:51.740 [376/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [377/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.740 [378/705] Linking target lib/librte_stack.so.24.0 00:02:51.740 [379/705] Linking static target lib/librte_security.a 00:02:51.740 [380/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [381/705] Linking target lib/librte_rib.so.24.0 00:02:51.740 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.740 [383/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:51.740 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.740 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.740 [386/705] Linking target lib/librte_mldev.so.24.0 00:02:51.998 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.998 [388/705] Linking target lib/librte_security.so.24.0 00:02:51.998 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.998 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:52.255 [391/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.255 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.513 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.513 [394/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:52.513 [395/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:52.513 [396/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:52.513 [397/705] Linking static target lib/librte_sched.a 00:02:52.513 [398/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.770 [399/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.770 [400/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:52.770 [401/705] Linking target lib/librte_sched.so.24.0 00:02:52.770 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:52.770 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:52.770 [404/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:53.028 [405/705] 
Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:53.028 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:53.028 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:53.028 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:53.028 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:53.028 [410/705] Linking static target lib/librte_ipsec.a 00:02:53.028 [411/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:53.286 [412/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:53.286 [413/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:53.286 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.286 [415/705] Linking target lib/librte_ipsec.so.24.0 00:02:53.286 [416/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:53.544 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:53.544 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:53.544 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:53.802 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:53.802 [421/705] Linking static target lib/librte_fib.a 00:02:53.802 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:53.802 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:53.802 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:53.802 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:53.802 [426/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.060 [427/705] Linking target lib/librte_fib.so.24.0 00:02:54.060 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:54.060 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:54.060 [430/705] Linking static target lib/librte_pdcp.a 00:02:54.318 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.318 [432/705] Linking target lib/librte_pdcp.so.24.0 00:02:54.318 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:54.318 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:54.318 [435/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:54.318 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:54.318 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:54.576 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:54.576 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:54.834 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:54.834 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:54.834 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:54.834 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:54.834 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:54.834 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:54.834 [446/705] Linking static target lib/librte_port.a 00:02:55.092 [447/705] Compiling C object 
lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:55.092 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:55.092 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:55.349 [450/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.350 [451/705] Linking target lib/librte_port.so.24.0 00:02:55.350 [452/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:55.350 [453/705] Linking static target lib/librte_pdump.a 00:02:55.350 [454/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:55.350 [455/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.350 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:55.350 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:55.607 [458/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.607 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:55.607 [460/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.607 [461/705] Linking target lib/librte_pdump.so.24.0 00:02:55.607 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:55.607 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:55.865 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:55.865 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:55.865 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:55.865 [467/705] Linking static target lib/librte_table.a 00:02:55.865 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:56.124 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:56.124 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.124 [471/705] Linking target lib/librte_table.so.24.0 00:02:56.124 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:56.382 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:56.382 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:56.382 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:56.382 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:56.640 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:56.640 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:56.640 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:56.640 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:56.640 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:56.898 [482/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:56.898 [483/705] Linking static target lib/librte_graph.a 00:02:56.898 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.898 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:57.156 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:57.156 [487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:57.156 
[488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:57.414 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:57.414 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.414 [491/705] Linking target lib/librte_graph.so.24.0 00:02:57.414 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.414 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:57.414 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:57.672 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:57.672 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:57.672 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:57.672 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:57.930 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.930 [500/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:57.930 [501/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:57.930 [502/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:57.930 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:58.187 [504/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:58.187 [505/705] Linking static target lib/librte_node.a 00:02:58.187 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.187 [507/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.187 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.187 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.187 [510/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.187 [511/705] Linking target lib/librte_node.so.24.0 00:02:58.446 [512/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.446 [513/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.446 [514/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.446 [515/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.446 [516/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.446 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.446 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.446 [519/705] Linking static target drivers/librte_bus_vdev.a 00:02:58.709 [520/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.709 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.709 [522/705] Linking static target drivers/librte_bus_pci.a 00:02:58.709 [523/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.709 [524/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.709 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.709 [526/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:58.709 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:58.709 [528/705] Compiling C 
object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:58.709 [529/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:58.709 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:58.968 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.968 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.968 [533/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.968 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:58.968 [535/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.968 [536/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.968 [537/705] Linking static target drivers/librte_mempool_ring.a 00:02:58.968 [538/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.968 [539/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:58.968 [540/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:59.226 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:59.485 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:59.485 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:59.485 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:59.485 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:00.051 [546/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:00.051 [547/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:00.051 [548/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:00.051 [549/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.310 [550/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:00.310 [551/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:00.568 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:00.568 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:00.568 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:00.568 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:00.568 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:00.568 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:00.827 [558/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:00.827 [559/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:01.087 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:01.087 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:01.345 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:01.345 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:01.345 [564/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:01.345 [565/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:01.630 [566/705] Compiling C object 
app/dpdk-graph.p/graph_main.c.o 00:03:01.630 [567/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:01.630 [568/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:01.630 [569/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:01.630 [570/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:01.630 [571/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:01.905 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:01.906 [573/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:01.906 [574/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:01.906 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:01.906 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:01.906 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:02.165 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:02.165 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:02.165 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:02.165 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:02.424 [582/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:02.424 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:02.424 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.424 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:02.424 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:02.424 [587/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.682 [588/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:02.682 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:02.682 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:02.941 [591/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.941 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:02.941 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:02.941 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:02.941 [595/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:03.199 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:03.199 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.199 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:03.457 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:03.457 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:03.457 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:03.457 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
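Aside: the dpdk-testpmd objects that appear further down ([652/705] 5tswap and [679/705]-[683/705] iofwd, macfwd, macswap, csumonly, rxonly) are forwarding engines, each a thin variation on the same burst receive/transmit loop. Roughly, and only as a sketch (port, queue, and burst size are example values):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32   /* example burst size */

    /* One iteration of an io-forward style loop: receive a burst and
     * transmit it back unmodified on the same port/queue. */
    static void forward_once(uint16_t port, uint16_t queue)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, queue, pkts, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port, queue, pkts, nb_rx);
        /* Free anything the TX queue could not accept. */
        while (nb_tx < nb_rx)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }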
00:03:03.457 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:03.458 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:03.716 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:03.716 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:03.716 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:03.716 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:03.716 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:03.974 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:03.974 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:03.974 [612/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.974 [613/705] Linking static target lib/librte_vhost.a 00:03:03.974 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:04.232 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:04.232 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:04.800 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:04.800 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:04.800 [619/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.800 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:04.800 [621/705] Linking target lib/librte_vhost.so.24.0 00:03:04.800 [622/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:04.800 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:05.059 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:05.059 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:05.059 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:05.059 [627/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:05.059 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:05.059 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:05.317 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:05.317 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:05.317 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:05.317 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:05.317 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:05.575 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:05.575 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:05.575 [637/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:05.575 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:05.575 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:05.575 [640/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:05.842 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:05.842 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:05.842 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:05.842 [644/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:05.842 [645/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.104 [646/705] Linking static target lib/librte_pipeline.a 00:03:06.104 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:06.104 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.104 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:06.104 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.361 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.361 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:06.361 [653/705] Linking target app/dpdk-pdump 00:03:06.361 [654/705] Linking target app/dpdk-graph 00:03:06.361 [655/705] Linking target app/dpdk-dumpcap 00:03:06.361 [656/705] Linking target app/dpdk-proc-info 00:03:06.361 [657/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:06.619 [658/705] Linking target app/dpdk-test-acl 00:03:06.619 [659/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:06.619 [660/705] Linking target app/dpdk-test-cmdline 00:03:06.619 [661/705] Linking target app/dpdk-test-compress-perf 00:03:06.619 [662/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:06.619 [663/705] Linking target app/dpdk-test-crypto-perf 00:03:06.619 [664/705] Linking target app/dpdk-test-dma-perf 00:03:06.877 [665/705] Linking target app/dpdk-test-eventdev 00:03:06.877 [666/705] Linking target app/dpdk-test-gpudev 00:03:06.877 [667/705] Linking target app/dpdk-test-flow-perf 00:03:06.877 [668/705] Linking target app/dpdk-test-fib 00:03:06.877 [669/705] Linking target app/dpdk-test-mldev 00:03:06.877 [670/705] Linking target app/dpdk-test-pipeline 00:03:06.877 [671/705] Linking target app/dpdk-test-bbdev 00:03:07.134 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:07.134 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:07.393 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:07.393 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.393 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:07.651 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:07.651 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:07.651 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:07.651 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:07.909 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.909 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.909 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:07.909 [684/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.168 [685/705] Linking target 
lib/librte_pipeline.so.24.0 00:03:08.168 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:08.168 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.168 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.425 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:08.425 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.425 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:08.682 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:08.682 [693/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:08.939 [694/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:08.939 [695/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:08.939 [696/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:08.939 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.939 [698/705] Linking target app/dpdk-test-regex 00:03:09.196 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.196 [700/705] Linking target app/dpdk-test-sad 00:03:09.196 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:09.196 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:09.454 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.712 [704/705] Linking target app/dpdk-test-security-perf 00:03:09.712 [705/705] Linking target app/dpdk-testpmd 00:03:09.712 14:49:33 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:09.712 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.712 [0/1] Installing files. 
00:03:09.972 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.974 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:09.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.235 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.236 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.236 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
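Each DPDK library in this step is installed in two forms, a static archive (librte_X.a) and a versioned shared object (librte_X.so.24.0), both placed in /home/vagrant/spdk_repo/dpdk/build/lib. Once the libdpdk.pc pkg-config files land in build/lib/pkgconfig later in this same step, a consumer resolves compile and link flags against this tree through pkg-config. A minimal sketch, assuming a POSIX shell and a hypothetical one-file program main.c that is not part of this build:

export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig   # path taken from this log
pkg-config --cflags libdpdk   # header search path under build/include
pkg-config --libs libdpdk     # -L.../build/lib plus the librte_* link line
cc main.c $(pkg-config --cflags --libs libdpdk) -o app   # hypothetical consumer program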
00:03:10.236 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.236 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
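The .so.24.0 suffix is the full library version for this DPDK release; the unversioned development names are created near the end of this step as symlinks ("Installing symlink pointing to ..." below), while driver libraries are additionally installed under lib/dpdk/pmds-24.0, the plugin directory from which EAL can load PMDs when DPDK is built with shared drivers. A sketch of inspecting the resulting symlink chain, assuming the install prefix shown in this log:

ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
# expected layout after the symlink installs at the end of this step:
#   librte_eal.so     -> librte_eal.so.24    (link-time name)
#   librte_eal.so.24  -> librte_eal.so.24.0  (runtime soname)
#   librte_eal.so.24.0                       (the actual shared object)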
00:03:10.237 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.237 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.497 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.497 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.497 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.497 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.497 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.497 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.498 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.499 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.500 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.500 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:10.500 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:10.500 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:10.500 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:10.500 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:10.500 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:10.500 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:10.500 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:10.500 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:10.500 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:10.500 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:10.500 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:10.500 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:10.500 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:10.500 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:10.500 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:10.500 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:10.500 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:10.500 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:10.500 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:10.500 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:10.500 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:10.500 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:10.500 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:10.500 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:10.500 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:10.500 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:10.500 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:10.501 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:10.501 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:10.501 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:10.501 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:10.501 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:10.501 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:10.501 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:10.501 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:10.501 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:10.501 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:10.501 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:10.501 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:10.501 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:10.501 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:10.501 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:10.501 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:10.501 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:10.501 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:10.501 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:10.501 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:10.501 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:10.501 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:10.501 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:10.501 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:10.501 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:10.501 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:10.501 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:10.501 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:10.501 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:10.501 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:10.501 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:10.501 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:10.501 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:10.501 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:10.501 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:10.501 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:10.501 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:10.501 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:10.501 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:10.501 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:10.501 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:10.501 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:10.501 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:10.501 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:10.501 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:10.501 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:10.501 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:10.501 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:10.501 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:10.501 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:10.501 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:10.501 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:10.501 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:10.501 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:10.501 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:10.501 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:10.501 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:10.501 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:10.501 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:10.501 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:10.501 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:10.501 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:10.501 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:10.501 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:10.501 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:10.501 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:10.501 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:10.501 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:10.501 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:10.501 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:10.501 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:10.501 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:10.501 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:10.501 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:10.501 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:10.501 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:10.501 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:10.501 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:10.501 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:10.501 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:10.501 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:10.501 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:10.501 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:10.501 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:10.501 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:10.501 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:10.501 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:10.501 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:10.501 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:10.501 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:10.501 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:10.501 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:10.501 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:10.501 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:10.501 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:10.501 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:10.501 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:10.501 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:10.501 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:10.501 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:10.501 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:10.501 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:10.502 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
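The './librte_*.so' -> 'dpdk/pmds-24.0/*' moves and the matching symlinks around them are DPDK's driver layout: PMD shared objects are kept in an ABI-versioned pmds-24.0 subdirectory, which a shared-library build of EAL uses as its default plugin path, while the core librte_* libraries stay directly under build/lib. A quick way to inspect the result, assuming only this run's workspace paths:

  ls -l /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/
  readlink -f /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so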
00:03:10.502 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:10.502 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:10.502 14:49:33 -- common/autobuild_common.sh@192 -- $ uname -s
00:03:10.502 14:49:34 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:10.502 ************************************
00:03:10.502 END TEST build_native_dpdk
00:03:10.502 ************************************
00:03:10.502 14:49:34 -- common/autobuild_common.sh@203 -- $ cat
00:03:10.502 14:49:34 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:10.502
00:03:10.502 real 0m37.028s
00:03:10.502 user 4m17.829s
00:03:10.502 sys 0m38.953s
00:03:10.502 14:49:34 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:03:10.502 14:49:34 -- common/autotest_common.sh@10 -- $ set +x
00:03:10.502 14:49:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:10.502 14:49:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:10.502 14:49:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:10.759 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:10.759 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:10.759 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:03:10.759 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:11.017 Using 'verbs' RDMA provider
00:03:21.919 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:03:31.888 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:31.888 Creating mk/config.mk...done.
00:03:31.888 Creating mk/cc.flags.mk...done.
00:03:31.888 Type 'make' to build.
00:03:31.888 14:49:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:31.888 14:49:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:03:31.888 14:49:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:03:31.888 14:49:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:31.888 ************************************
00:03:31.888 START TEST make
00:03:31.888 ************************************
00:03:31.888 14:49:55 -- common/autotest_common.sh@1114 -- $ make -j10
00:03:31.888 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:31.888 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:31.888 meson setup builddir \
00:03:31.888 -Dwith-libaio=enabled \
00:03:31.888 -Dwith-liburing=enabled \
00:03:31.888 -Dwith-libvfn=disabled \
00:03:31.888 -Dwith-spdk=false && \
00:03:31.888 meson compile -C builddir && \
00:03:31.888 cd -)
00:03:32.146 make[1]: Nothing to be done for 'all'.
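The configure line above points SPDK at the freshly installed DPDK via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, and the 'Using .../build/lib/pkgconfig for additional libs...' line shows it resolving that install through the libdpdk.pc file installed earlier. The xnvme step launched by run_test can likewise be reproduced by hand. The commands below are a sketch that only assumes this run's workspace paths; the meson -D flags are copied verbatim from the logged command:

  # verify the DPDK install that configure consumes
  PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig pkg-config --modversion libdpdk
  # re-run the xnvme subproject build standalone
  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled -Dwith-libvfn=disabled -Dwith-spdk=false
  meson compile -C builddir

meson setup succeeds here only because the libaio and liburing development files are present; the 'Library aio found: YES' and 'Run-time dependency liburing found: YES 2.2' probes in the configuration output below are exactly those checks.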
00:03:34.059 The Meson build system
00:03:34.059 Version: 1.5.0
00:03:34.059 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:34.059 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:34.059 Build type: native build
00:03:34.059 Project name: xnvme
00:03:34.059 Project version: 0.7.3
00:03:34.059 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:34.059 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:34.059 Host machine cpu family: x86_64
00:03:34.059 Host machine cpu: x86_64
00:03:34.059 Message: host_machine.system: linux
00:03:34.059 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:34.059 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:34.059 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:34.059 Run-time dependency threads found: YES
00:03:34.059 Has header "setupapi.h" : NO
00:03:34.059 Has header "linux/blkzoned.h" : YES
00:03:34.059 Has header "linux/blkzoned.h" : YES (cached)
00:03:34.059 Has header "libaio.h" : YES
00:03:34.059 Library aio found: YES
00:03:34.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:34.059 Run-time dependency liburing found: YES 2.2
00:03:34.059 Dependency libvfn skipped: feature with-libvfn disabled
00:03:34.059 Run-time dependency appleframeworks found: NO (tried framework)
00:03:34.059 Run-time dependency appleframeworks found: NO (tried framework)
00:03:34.059 Configuring xnvme_config.h using configuration
00:03:34.059 Configuring xnvme.spec using configuration
00:03:34.059 Run-time dependency bash-completion found: YES 2.11
00:03:34.059 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:34.059 Program cp found: YES (/usr/bin/cp)
00:03:34.059 Has header "winsock2.h" : NO
00:03:34.059 Has header "dbghelp.h" : NO
00:03:34.059 Library rpcrt4 found: NO
00:03:34.059 Library rt found: YES
00:03:34.059 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:34.059 Found CMake: /usr/bin/cmake (3.27.7)
00:03:34.059 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:03:34.059 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:03:34.059 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:03:34.059 Build targets in project: 32
00:03:34.059
00:03:34.059 xnvme 0.7.3
00:03:34.059
00:03:34.059 User defined options
00:03:34.059 with-libaio : enabled
00:03:34.059 with-liburing: enabled
00:03:34.059 with-libvfn : disabled
00:03:34.059 with-spdk : false
00:03:34.059
00:03:34.059 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:34.625 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:34.625 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:34.626 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:34.626 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:34.626 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:34.626 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:34.626 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:34.626 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:03:34.626 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:34.626 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:03:34.626 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:03:34.626 [11/203]
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.626 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:34.626 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:34.626 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:34.626 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:34.626 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:34.626 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:34.626 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:34.884 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:34.884 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:34.884 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:34.884 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:34.884 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:34.884 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:34.884 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:34.884 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:34.884 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:34.884 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:34.884 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:34.884 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:34.884 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:34.884 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:34.884 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:34.884 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:34.884 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:34.884 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:34.884 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:34.884 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:34.884 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:34.884 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:34.884 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:34.884 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:34.884 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:34.884 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:34.884 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:34.884 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:34.884 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:34.884 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:34.884 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:34.884 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:34.884 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:34.884 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:34.884 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:34.884 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:34.884 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:34.884 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:35.142 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:35.142 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:35.142 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:35.142 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:35.142 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:35.142 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:35.142 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:35.142 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:35.142 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:35.142 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:35.142 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:35.142 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:35.142 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:35.142 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:35.142 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:35.142 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:35.142 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:35.142 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:35.142 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:35.142 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:35.142 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:35.401 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:35.401 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:35.401 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:35.401 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:35.401 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:35.401 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:35.401 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:35.401 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:35.401 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:35.401 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:35.401 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:35.401 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:35.401 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:35.401 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:35.401 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:35.401 [93/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:35.401 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:35.401 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:35.401 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:35.401 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:35.401 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:35.401 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:35.401 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:35.401 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:35.401 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:35.658 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:35.658 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:35.658 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:35.658 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:35.658 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:35.658 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:35.658 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:35.658 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:35.658 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:35.658 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:35.658 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:35.658 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:35.658 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.658 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:35.658 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:35.658 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:35.658 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:35.658 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:35.658 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:35.658 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:35.658 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:35.658 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:35.658 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:35.658 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:35.658 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:35.658 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:35.658 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:35.658 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:35.658 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:35.658 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:35.658 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:35.658 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:35.658 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:35.915 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:35.915 [137/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:35.915 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:35.915 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:35.915 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:35.915 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:35.915 [142/203] Linking target lib/libxnvme.so 00:03:35.915 [143/203] 
Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:35.915 [144/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:35.915 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:35.915 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:35.915 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:35.915 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:35.915 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:35.915 [150/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:35.915 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:35.915 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:35.915 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:35.915 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:35.915 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:36.172 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:36.172 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:36.172 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:36.172 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:36.172 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:36.172 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:36.172 [162/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:36.172 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:36.172 [164/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:36.172 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:36.172 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:36.172 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:36.172 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:36.172 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:36.172 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:36.172 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:36.430 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:36.430 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:36.430 [174/203] Linking static target lib/libxnvme.a 00:03:36.430 [175/203] Linking target tests/xnvme_tests_enum 00:03:36.430 [176/203] Linking target tests/xnvme_tests_buf 00:03:36.430 [177/203] Linking target tests/xnvme_tests_xnvme_file 00:03:36.430 [178/203] Linking target tests/xnvme_tests_lblk 00:03:36.430 [179/203] Linking target tests/xnvme_tests_ioworker 00:03:36.430 [180/203] Linking target tests/xnvme_tests_scc 00:03:36.430 [181/203] Linking target tests/xnvme_tests_znd_state 00:03:36.430 [182/203] Linking target tests/xnvme_tests_async_intf 00:03:36.430 [183/203] Linking target tests/xnvme_tests_cli 00:03:36.430 [184/203] Linking target tests/xnvme_tests_znd_append 00:03:36.430 [185/203] Linking target tests/xnvme_tests_kvs 00:03:36.430 [186/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:36.430 [187/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:36.430 [188/203] Linking target tests/xnvme_tests_map 00:03:36.430 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:36.430 [190/203] Linking target tools/xnvme 00:03:36.430 [191/203] Linking target 
examples/xnvme_hello 00:03:36.430 [192/203] Linking target examples/xnvme_enum 00:03:36.430 [193/203] Linking target examples/xnvme_dev 00:03:36.430 [194/203] Linking target tools/xnvme_file 00:03:36.430 [195/203] Linking target tools/xdd 00:03:36.430 [196/203] Linking target tools/lblk 00:03:36.430 [197/203] Linking target examples/xnvme_io_async 00:03:36.430 [198/203] Linking target tools/zoned 00:03:36.430 [199/203] Linking target tools/kvs 00:03:36.430 [200/203] Linking target examples/zoned_io_async 00:03:36.430 [201/203] Linking target examples/xnvme_single_async 00:03:36.430 [202/203] Linking target examples/xnvme_single_sync 00:03:36.430 [203/203] Linking target examples/zoned_io_sync 00:03:36.430 INFO: autodetecting backend as ninja 00:03:36.430 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:36.688 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:48.884 CC lib/ut_mock/mock.o 00:03:48.884 CC lib/ut/ut.o 00:03:48.884 CC lib/log/log_flags.o 00:03:48.884 CC lib/log/log_deprecated.o 00:03:48.884 CC lib/log/log.o 00:03:48.884 LIB libspdk_ut_mock.a 00:03:48.884 LIB libspdk_ut.a 00:03:48.884 SO libspdk_ut_mock.so.5.0 00:03:48.884 LIB libspdk_log.a 00:03:48.884 SO libspdk_ut.so.1.0 00:03:48.884 SO libspdk_log.so.6.1 00:03:48.884 SYMLINK libspdk_ut_mock.so 00:03:48.884 SYMLINK libspdk_ut.so 00:03:48.884 SYMLINK libspdk_log.so 00:03:49.142 CC lib/util/base64.o 00:03:49.142 CC lib/dma/dma.o 00:03:49.142 CC lib/util/bit_array.o 00:03:49.142 CC lib/util/cpuset.o 00:03:49.142 CC lib/ioat/ioat.o 00:03:49.142 CC lib/util/crc16.o 00:03:49.142 CC lib/util/crc32c.o 00:03:49.142 CC lib/util/crc32.o 00:03:49.142 CXX lib/trace_parser/trace.o 00:03:49.142 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.142 CC lib/util/crc32_ieee.o 00:03:49.142 CC lib/vfio_user/host/vfio_user.o 00:03:49.142 CC lib/util/crc64.o 00:03:49.142 LIB libspdk_dma.a 00:03:49.142 CC lib/util/dif.o 00:03:49.142 SO libspdk_dma.so.3.0 00:03:49.142 CC lib/util/fd.o 00:03:49.142 CC lib/util/file.o 00:03:49.142 CC lib/util/hexlify.o 00:03:49.142 SYMLINK libspdk_dma.so 00:03:49.142 CC lib/util/iov.o 00:03:49.142 CC lib/util/math.o 00:03:49.142 CC lib/util/pipe.o 00:03:49.142 LIB libspdk_ioat.a 00:03:49.400 SO libspdk_ioat.so.6.0 00:03:49.400 LIB libspdk_vfio_user.a 00:03:49.400 CC lib/util/strerror_tls.o 00:03:49.400 CC lib/util/string.o 00:03:49.400 SO libspdk_vfio_user.so.4.0 00:03:49.400 SYMLINK libspdk_ioat.so 00:03:49.400 CC lib/util/uuid.o 00:03:49.400 CC lib/util/fd_group.o 00:03:49.400 SYMLINK libspdk_vfio_user.so 00:03:49.400 CC lib/util/xor.o 00:03:49.400 CC lib/util/zipf.o 00:03:49.658 LIB libspdk_trace_parser.a 00:03:49.658 SO libspdk_trace_parser.so.4.0 00:03:49.916 LIB libspdk_util.a 00:03:49.916 SYMLINK libspdk_trace_parser.so 00:03:49.916 SO libspdk_util.so.8.0 00:03:49.916 SYMLINK libspdk_util.so 00:03:50.175 CC lib/rdma/common.o 00:03:50.175 CC lib/rdma/rdma_verbs.o 00:03:50.175 CC lib/idxd/idxd.o 00:03:50.175 CC lib/json/json_parse.o 00:03:50.175 CC lib/env_dpdk/env.o 00:03:50.175 CC lib/env_dpdk/memory.o 00:03:50.175 CC lib/idxd/idxd_user.o 00:03:50.175 CC lib/json/json_util.o 00:03:50.175 CC lib/conf/conf.o 00:03:50.175 CC lib/vmd/vmd.o 00:03:50.175 LIB libspdk_conf.a 00:03:50.175 SO libspdk_conf.so.5.0 00:03:50.175 CC lib/idxd/idxd_kernel.o 00:03:50.175 LIB libspdk_rdma.a 00:03:50.175 SYMLINK libspdk_conf.so 00:03:50.175 CC lib/vmd/led.o 00:03:50.175 CC lib/env_dpdk/pci.o 00:03:50.175 CC lib/json/json_write.o 00:03:50.175 CC lib/env_dpdk/init.o 
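The LIB/SO/SYMLINK triples in this stretch come from the --with-shared flag passed to configure above: in addition to the static .a archives (LIB), each SPDK component is linked as a versioned shared object (SO, e.g. libspdk_log.so.6.1 or libspdk_ut.so.1.0) with an unversioned symlink beside it (SYMLINK) for the linker. A sketch for confirming the embedded soname on the finished tree, assuming SPDK's usual build/lib output directory in this workspace:

  ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so*
  readelf -d /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so | grep SONAME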
00:03:50.175 SO libspdk_rdma.so.5.0 00:03:50.433 SYMLINK libspdk_rdma.so 00:03:50.433 CC lib/env_dpdk/threads.o 00:03:50.433 CC lib/env_dpdk/pci_ioat.o 00:03:50.433 CC lib/env_dpdk/pci_virtio.o 00:03:50.433 CC lib/env_dpdk/pci_vmd.o 00:03:50.433 CC lib/env_dpdk/pci_idxd.o 00:03:50.433 CC lib/env_dpdk/pci_event.o 00:03:50.433 CC lib/env_dpdk/sigbus_handler.o 00:03:50.433 CC lib/env_dpdk/pci_dpdk.o 00:03:50.433 LIB libspdk_json.a 00:03:50.691 SO libspdk_json.so.5.1 00:03:50.691 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:50.691 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:50.691 LIB libspdk_idxd.a 00:03:50.691 SYMLINK libspdk_json.so 00:03:50.691 SO libspdk_idxd.so.11.0 00:03:50.691 SYMLINK libspdk_idxd.so 00:03:50.691 LIB libspdk_vmd.a 00:03:50.691 SO libspdk_vmd.so.5.0 00:03:50.691 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.691 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.691 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.691 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.691 SYMLINK libspdk_vmd.so 00:03:50.949 LIB libspdk_jsonrpc.a 00:03:50.949 SO libspdk_jsonrpc.so.5.1 00:03:50.949 SYMLINK libspdk_jsonrpc.so 00:03:51.207 CC lib/rpc/rpc.o 00:03:51.466 LIB libspdk_rpc.a 00:03:51.466 LIB libspdk_env_dpdk.a 00:03:51.466 SO libspdk_rpc.so.5.0 00:03:51.466 SYMLINK libspdk_rpc.so 00:03:51.466 SO libspdk_env_dpdk.so.13.0 00:03:51.466 CC lib/notify/notify.o 00:03:51.466 CC lib/notify/notify_rpc.o 00:03:51.466 CC lib/sock/sock.o 00:03:51.466 CC lib/trace/trace_flags.o 00:03:51.466 CC lib/trace/trace.o 00:03:51.466 CC lib/trace/trace_rpc.o 00:03:51.466 CC lib/sock/sock_rpc.o 00:03:51.466 SYMLINK libspdk_env_dpdk.so 00:03:51.725 LIB libspdk_notify.a 00:03:51.725 SO libspdk_notify.so.5.0 00:03:51.725 SYMLINK libspdk_notify.so 00:03:51.725 LIB libspdk_trace.a 00:03:51.725 SO libspdk_trace.so.9.0 00:03:51.986 SYMLINK libspdk_trace.so 00:03:51.986 LIB libspdk_sock.a 00:03:51.986 SO libspdk_sock.so.8.0 00:03:51.986 SYMLINK libspdk_sock.so 00:03:51.986 CC lib/thread/iobuf.o 00:03:51.986 CC lib/thread/thread.o 00:03:52.245 CC lib/nvme/nvme_ctrlr.o 00:03:52.245 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:52.245 CC lib/nvme/nvme_ns.o 00:03:52.245 CC lib/nvme/nvme_fabric.o 00:03:52.245 CC lib/nvme/nvme_pcie.o 00:03:52.245 CC lib/nvme/nvme_qpair.o 00:03:52.245 CC lib/nvme/nvme_ns_cmd.o 00:03:52.245 CC lib/nvme/nvme_pcie_common.o 00:03:52.245 CC lib/nvme/nvme.o 00:03:52.812 CC lib/nvme/nvme_quirks.o 00:03:52.812 CC lib/nvme/nvme_transport.o 00:03:52.812 CC lib/nvme/nvme_discovery.o 00:03:52.812 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:53.069 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.069 CC lib/nvme/nvme_tcp.o 00:03:53.069 CC lib/nvme/nvme_opal.o 00:03:53.069 CC lib/nvme/nvme_io_msg.o 00:03:53.069 CC lib/nvme/nvme_poll_group.o 00:03:53.327 CC lib/nvme/nvme_zns.o 00:03:53.327 CC lib/nvme/nvme_cuse.o 00:03:53.327 CC lib/nvme/nvme_vfio_user.o 00:03:53.327 CC lib/nvme/nvme_rdma.o 00:03:53.585 LIB libspdk_thread.a 00:03:53.585 SO libspdk_thread.so.9.0 00:03:53.585 SYMLINK libspdk_thread.so 00:03:53.585 CC lib/blob/blobstore.o 00:03:53.844 CC lib/init/json_config.o 00:03:53.844 CC lib/accel/accel.o 00:03:53.844 CC lib/virtio/virtio.o 00:03:53.844 CC lib/init/subsystem.o 00:03:53.844 CC lib/init/subsystem_rpc.o 00:03:53.844 CC lib/init/rpc.o 00:03:53.844 CC lib/blob/request.o 00:03:53.844 CC lib/virtio/virtio_vhost_user.o 00:03:54.102 CC lib/blob/zeroes.o 00:03:54.102 LIB libspdk_init.a 00:03:54.102 SO libspdk_init.so.4.0 00:03:54.102 CC lib/blob/blob_bs_dev.o 00:03:54.102 SYMLINK libspdk_init.so 00:03:54.102 CC lib/virtio/virtio_vfio_user.o 
00:03:54.102 CC lib/accel/accel_rpc.o 00:03:54.102 CC lib/virtio/virtio_pci.o 00:03:54.102 CC lib/accel/accel_sw.o 00:03:54.361 CC lib/event/app.o 00:03:54.361 CC lib/event/reactor.o 00:03:54.361 CC lib/event/app_rpc.o 00:03:54.361 CC lib/event/log_rpc.o 00:03:54.361 LIB libspdk_virtio.a 00:03:54.361 SO libspdk_virtio.so.6.0 00:03:54.361 CC lib/event/scheduler_static.o 00:03:54.621 SYMLINK libspdk_virtio.so 00:03:54.621 LIB libspdk_nvme.a 00:03:54.621 LIB libspdk_accel.a 00:03:54.881 SO libspdk_accel.so.14.0 00:03:54.881 LIB libspdk_event.a 00:03:54.881 SO libspdk_event.so.12.0 00:03:54.881 SYMLINK libspdk_accel.so 00:03:54.881 SO libspdk_nvme.so.12.0 00:03:54.881 SYMLINK libspdk_event.so 00:03:55.141 CC lib/bdev/bdev_rpc.o 00:03:55.141 CC lib/bdev/bdev.o 00:03:55.141 CC lib/bdev/bdev_zone.o 00:03:55.141 CC lib/bdev/part.o 00:03:55.141 CC lib/bdev/scsi_nvme.o 00:03:55.141 SYMLINK libspdk_nvme.so 00:03:56.573 LIB libspdk_blob.a 00:03:56.573 SO libspdk_blob.so.10.1 00:03:56.573 SYMLINK libspdk_blob.so 00:03:56.834 CC lib/lvol/lvol.o 00:03:56.834 CC lib/blobfs/blobfs.o 00:03:56.834 CC lib/blobfs/tree.o 00:03:57.777 LIB libspdk_blobfs.a 00:03:57.777 SO libspdk_blobfs.so.9.0 00:03:57.777 LIB libspdk_lvol.a 00:03:57.777 SYMLINK libspdk_blobfs.so 00:03:57.777 SO libspdk_lvol.so.9.1 00:03:57.777 LIB libspdk_bdev.a 00:03:57.777 SYMLINK libspdk_lvol.so 00:03:58.038 SO libspdk_bdev.so.14.0 00:03:58.038 SYMLINK libspdk_bdev.so 00:03:58.299 CC lib/nbd/nbd.o 00:03:58.299 CC lib/nbd/nbd_rpc.o 00:03:58.299 CC lib/ublk/ublk.o 00:03:58.299 CC lib/nvmf/ctrlr.o 00:03:58.299 CC lib/ublk/ublk_rpc.o 00:03:58.299 CC lib/nvmf/ctrlr_discovery.o 00:03:58.300 CC lib/scsi/dev.o 00:03:58.300 CC lib/nvmf/ctrlr_bdev.o 00:03:58.300 CC lib/ftl/ftl_core.o 00:03:58.300 CC lib/nvmf/subsystem.o 00:03:58.300 CC lib/nvmf/nvmf.o 00:03:58.300 CC lib/nvmf/nvmf_rpc.o 00:03:58.300 CC lib/scsi/lun.o 00:03:58.560 CC lib/ftl/ftl_init.o 00:03:58.560 LIB libspdk_nbd.a 00:03:58.560 SO libspdk_nbd.so.6.0 00:03:58.820 SYMLINK libspdk_nbd.so 00:03:58.820 CC lib/ftl/ftl_layout.o 00:03:58.820 CC lib/nvmf/transport.o 00:03:58.820 CC lib/scsi/port.o 00:03:58.820 CC lib/scsi/scsi.o 00:03:58.820 CC lib/scsi/scsi_bdev.o 00:03:58.820 LIB libspdk_ublk.a 00:03:58.820 SO libspdk_ublk.so.2.0 00:03:58.820 CC lib/scsi/scsi_pr.o 00:03:59.079 CC lib/scsi/scsi_rpc.o 00:03:59.079 SYMLINK libspdk_ublk.so 00:03:59.079 CC lib/scsi/task.o 00:03:59.079 CC lib/ftl/ftl_debug.o 00:03:59.079 CC lib/nvmf/tcp.o 00:03:59.338 CC lib/nvmf/rdma.o 00:03:59.338 CC lib/ftl/ftl_io.o 00:03:59.338 CC lib/ftl/ftl_sb.o 00:03:59.338 CC lib/ftl/ftl_l2p.o 00:03:59.338 CC lib/ftl/ftl_l2p_flat.o 00:03:59.338 LIB libspdk_scsi.a 00:03:59.338 CC lib/ftl/ftl_nv_cache.o 00:03:59.338 CC lib/ftl/ftl_band.o 00:03:59.338 CC lib/ftl/ftl_band_ops.o 00:03:59.338 CC lib/ftl/ftl_writer.o 00:03:59.338 SO libspdk_scsi.so.8.0 00:03:59.596 CC lib/ftl/ftl_rq.o 00:03:59.596 CC lib/ftl/ftl_reloc.o 00:03:59.596 SYMLINK libspdk_scsi.so 00:03:59.596 CC lib/iscsi/conn.o 00:03:59.596 CC lib/iscsi/init_grp.o 00:03:59.596 CC lib/iscsi/iscsi.o 00:03:59.596 CC lib/ftl/ftl_l2p_cache.o 00:03:59.854 CC lib/ftl/ftl_p2l.o 00:03:59.854 CC lib/ftl/mngt/ftl_mngt.o 00:03:59.854 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:00.112 CC lib/ftl/mngt/ftl_mngt_band.o 
00:04:00.112 CC lib/iscsi/md5.o 00:04:00.371 CC lib/vhost/vhost.o 00:04:00.371 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:00.371 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:00.371 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:00.371 CC lib/iscsi/param.o 00:04:00.371 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:00.371 CC lib/ftl/utils/ftl_conf.o 00:04:00.371 CC lib/vhost/vhost_rpc.o 00:04:00.371 CC lib/ftl/utils/ftl_md.o 00:04:00.630 CC lib/iscsi/portal_grp.o 00:04:00.630 CC lib/ftl/utils/ftl_mempool.o 00:04:00.630 CC lib/iscsi/tgt_node.o 00:04:00.630 CC lib/vhost/vhost_scsi.o 00:04:00.630 CC lib/vhost/vhost_blk.o 00:04:00.630 CC lib/vhost/rte_vhost_user.o 00:04:00.630 CC lib/ftl/utils/ftl_bitmap.o 00:04:00.630 CC lib/ftl/utils/ftl_property.o 00:04:00.889 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:00.889 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:00.889 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:00.889 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:00.889 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:00.889 CC lib/iscsi/iscsi_subsystem.o 00:04:01.148 CC lib/iscsi/iscsi_rpc.o 00:04:01.148 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:01.148 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:01.148 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:01.148 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:01.148 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:01.148 CC lib/ftl/base/ftl_base_dev.o 00:04:01.415 CC lib/ftl/base/ftl_base_bdev.o 00:04:01.415 CC lib/ftl/ftl_trace.o 00:04:01.415 CC lib/iscsi/task.o 00:04:01.415 LIB libspdk_nvmf.a 00:04:01.415 SO libspdk_nvmf.so.17.0 00:04:01.415 LIB libspdk_iscsi.a 00:04:01.415 LIB libspdk_ftl.a 00:04:01.674 SO libspdk_iscsi.so.7.0 00:04:01.674 SYMLINK libspdk_nvmf.so 00:04:01.674 LIB libspdk_vhost.a 00:04:01.674 SO libspdk_ftl.so.8.0 00:04:01.674 SO libspdk_vhost.so.7.1 00:04:01.674 SYMLINK libspdk_iscsi.so 00:04:01.674 SYMLINK libspdk_vhost.so 00:04:01.935 SYMLINK libspdk_ftl.so 00:04:02.196 CC module/env_dpdk/env_dpdk_rpc.o 00:04:02.196 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:02.196 CC module/blob/bdev/blob_bdev.o 00:04:02.196 CC module/scheduler/gscheduler/gscheduler.o 00:04:02.196 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:02.196 CC module/accel/ioat/accel_ioat.o 00:04:02.196 CC module/accel/error/accel_error.o 00:04:02.196 CC module/sock/posix/posix.o 00:04:02.196 CC module/accel/iaa/accel_iaa.o 00:04:02.196 CC module/accel/dsa/accel_dsa.o 00:04:02.196 LIB libspdk_env_dpdk_rpc.a 00:04:02.196 SO libspdk_env_dpdk_rpc.so.5.0 00:04:02.196 SYMLINK libspdk_env_dpdk_rpc.so 00:04:02.196 CC module/accel/iaa/accel_iaa_rpc.o 00:04:02.196 LIB libspdk_scheduler_gscheduler.a 00:04:02.196 LIB libspdk_scheduler_dpdk_governor.a 00:04:02.196 SO libspdk_scheduler_gscheduler.so.3.0 00:04:02.196 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:02.196 LIB libspdk_scheduler_dynamic.a 00:04:02.196 CC module/accel/ioat/accel_ioat_rpc.o 00:04:02.196 CC module/accel/error/accel_error_rpc.o 00:04:02.459 SO libspdk_scheduler_dynamic.so.3.0 00:04:02.459 CC module/accel/dsa/accel_dsa_rpc.o 00:04:02.459 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:02.459 SYMLINK libspdk_scheduler_gscheduler.so 00:04:02.459 SYMLINK libspdk_scheduler_dynamic.so 00:04:02.459 LIB libspdk_accel_iaa.a 00:04:02.459 LIB libspdk_blob_bdev.a 00:04:02.459 SO libspdk_accel_iaa.so.2.0 00:04:02.459 SO libspdk_blob_bdev.so.10.1 00:04:02.459 LIB libspdk_accel_ioat.a 00:04:02.459 LIB libspdk_accel_dsa.a 00:04:02.459 LIB libspdk_accel_error.a 00:04:02.459 SO libspdk_accel_ioat.so.5.0 00:04:02.459 SYMLINK libspdk_accel_iaa.so 00:04:02.459 
SYMLINK libspdk_blob_bdev.so 00:04:02.459 SO libspdk_accel_error.so.1.0 00:04:02.459 SO libspdk_accel_dsa.so.4.0 00:04:02.459 SYMLINK libspdk_accel_ioat.so 00:04:02.459 SYMLINK libspdk_accel_error.so 00:04:02.459 SYMLINK libspdk_accel_dsa.so 00:04:02.721 CC module/bdev/lvol/vbdev_lvol.o 00:04:02.721 CC module/bdev/malloc/bdev_malloc.o 00:04:02.721 CC module/bdev/null/bdev_null.o 00:04:02.721 CC module/bdev/gpt/gpt.o 00:04:02.721 CC module/bdev/delay/vbdev_delay.o 00:04:02.721 CC module/blobfs/bdev/blobfs_bdev.o 00:04:02.721 CC module/bdev/nvme/bdev_nvme.o 00:04:02.721 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.721 CC module/bdev/error/vbdev_error.o 00:04:02.721 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:02.721 CC module/bdev/gpt/vbdev_gpt.o 00:04:02.982 CC module/bdev/null/bdev_null_rpc.o 00:04:02.982 LIB libspdk_sock_posix.a 00:04:02.982 SO libspdk_sock_posix.so.5.0 00:04:02.982 LIB libspdk_blobfs_bdev.a 00:04:02.982 SO libspdk_blobfs_bdev.so.5.0 00:04:02.982 CC module/bdev/error/vbdev_error_rpc.o 00:04:02.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.982 SYMLINK libspdk_blobfs_bdev.so 00:04:02.982 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:02.982 SYMLINK libspdk_sock_posix.so 00:04:02.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:02.982 CC module/bdev/nvme/nvme_rpc.o 00:04:02.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:02.982 LIB libspdk_bdev_null.a 00:04:02.982 SO libspdk_bdev_null.so.5.0 00:04:02.982 LIB libspdk_bdev_gpt.a 00:04:02.982 LIB libspdk_bdev_passthru.a 00:04:02.982 SO libspdk_bdev_gpt.so.5.0 00:04:02.982 LIB libspdk_bdev_error.a 00:04:02.982 SO libspdk_bdev_passthru.so.5.0 00:04:02.982 SO libspdk_bdev_error.so.5.0 00:04:02.982 SYMLINK libspdk_bdev_null.so 00:04:02.982 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:02.982 LIB libspdk_bdev_delay.a 00:04:02.982 CC module/bdev/nvme/bdev_mdns_client.o 00:04:03.243 SYMLINK libspdk_bdev_passthru.so 00:04:03.243 SYMLINK libspdk_bdev_gpt.so 00:04:03.243 CC module/bdev/nvme/vbdev_opal.o 00:04:03.243 LIB libspdk_bdev_malloc.a 00:04:03.243 SYMLINK libspdk_bdev_error.so 00:04:03.243 SO libspdk_bdev_delay.so.5.0 00:04:03.243 SO libspdk_bdev_malloc.so.5.0 00:04:03.243 SYMLINK libspdk_bdev_delay.so 00:04:03.243 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:03.243 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:03.243 CC module/bdev/raid/bdev_raid.o 00:04:03.243 SYMLINK libspdk_bdev_malloc.so 00:04:03.243 CC module/bdev/split/vbdev_split.o 00:04:03.243 CC module/bdev/split/vbdev_split_rpc.o 00:04:03.243 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:03.501 CC module/bdev/raid/bdev_raid_rpc.o 00:04:03.501 CC module/bdev/xnvme/bdev_xnvme.o 00:04:03.501 LIB libspdk_bdev_lvol.a 00:04:03.501 LIB libspdk_bdev_split.a 00:04:03.501 SO libspdk_bdev_lvol.so.5.0 00:04:03.501 CC module/bdev/aio/bdev_aio.o 00:04:03.501 SO libspdk_bdev_split.so.5.0 00:04:03.501 CC module/bdev/ftl/bdev_ftl.o 00:04:03.501 SYMLINK libspdk_bdev_split.so 00:04:03.501 SYMLINK libspdk_bdev_lvol.so 00:04:03.501 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.501 CC module/bdev/raid/bdev_raid_sb.o 00:04:03.501 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:03.501 CC module/bdev/raid/raid0.o 00:04:03.501 CC module/bdev/raid/raid1.o 00:04:03.501 CC module/bdev/raid/concat.o 00:04:03.501 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:03.760 LIB libspdk_bdev_zone_block.a 00:04:03.760 SO libspdk_bdev_zone_block.so.5.0 00:04:03.760 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:03.760 SYMLINK libspdk_bdev_zone_block.so 00:04:03.760 LIB libspdk_bdev_aio.a 00:04:03.760 
LIB libspdk_bdev_xnvme.a 00:04:03.760 SO libspdk_bdev_aio.so.5.0 00:04:03.760 SO libspdk_bdev_xnvme.so.2.0 00:04:03.760 SYMLINK libspdk_bdev_aio.so 00:04:03.760 SYMLINK libspdk_bdev_xnvme.so 00:04:03.760 CC module/bdev/iscsi/bdev_iscsi.o 00:04:03.760 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.760 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.760 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.760 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:04.019 LIB libspdk_bdev_ftl.a 00:04:04.019 SO libspdk_bdev_ftl.so.5.0 00:04:04.019 SYMLINK libspdk_bdev_ftl.so 00:04:04.019 LIB libspdk_bdev_raid.a 00:04:04.019 SO libspdk_bdev_raid.so.5.0 00:04:04.280 LIB libspdk_bdev_iscsi.a 00:04:04.280 SO libspdk_bdev_iscsi.so.5.0 00:04:04.280 SYMLINK libspdk_bdev_raid.so 00:04:04.280 SYMLINK libspdk_bdev_iscsi.so 00:04:04.280 LIB libspdk_bdev_virtio.a 00:04:04.280 SO libspdk_bdev_virtio.so.5.0 00:04:04.542 SYMLINK libspdk_bdev_virtio.so 00:04:04.803 LIB libspdk_bdev_nvme.a 00:04:04.803 SO libspdk_bdev_nvme.so.6.0 00:04:05.064 SYMLINK libspdk_bdev_nvme.so 00:04:05.324 CC module/event/subsystems/sock/sock.o 00:04:05.324 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:05.324 CC module/event/subsystems/iobuf/iobuf.o 00:04:05.324 CC module/event/subsystems/vmd/vmd.o 00:04:05.324 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:05.324 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:05.324 CC module/event/subsystems/scheduler/scheduler.o 00:04:05.324 LIB libspdk_event_scheduler.a 00:04:05.584 LIB libspdk_event_sock.a 00:04:05.584 LIB libspdk_event_vmd.a 00:04:05.584 LIB libspdk_event_vhost_blk.a 00:04:05.584 SO libspdk_event_scheduler.so.3.0 00:04:05.584 LIB libspdk_event_iobuf.a 00:04:05.584 SO libspdk_event_vmd.so.5.0 00:04:05.584 SO libspdk_event_sock.so.4.0 00:04:05.584 SO libspdk_event_vhost_blk.so.2.0 00:04:05.584 SO libspdk_event_iobuf.so.2.0 00:04:05.584 SYMLINK libspdk_event_scheduler.so 00:04:05.584 SYMLINK libspdk_event_sock.so 00:04:05.584 SYMLINK libspdk_event_vmd.so 00:04:05.584 SYMLINK libspdk_event_vhost_blk.so 00:04:05.584 SYMLINK libspdk_event_iobuf.so 00:04:05.584 CC module/event/subsystems/accel/accel.o 00:04:05.845 LIB libspdk_event_accel.a 00:04:05.845 SO libspdk_event_accel.so.5.0 00:04:05.845 SYMLINK libspdk_event_accel.so 00:04:06.105 CC module/event/subsystems/bdev/bdev.o 00:04:06.105 LIB libspdk_event_bdev.a 00:04:06.105 SO libspdk_event_bdev.so.5.0 00:04:06.425 SYMLINK libspdk_event_bdev.so 00:04:06.426 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:06.426 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:06.426 CC module/event/subsystems/nbd/nbd.o 00:04:06.426 CC module/event/subsystems/scsi/scsi.o 00:04:06.426 CC module/event/subsystems/ublk/ublk.o 00:04:06.426 LIB libspdk_event_scsi.a 00:04:06.426 LIB libspdk_event_nbd.a 00:04:06.426 LIB libspdk_event_ublk.a 00:04:06.696 SO libspdk_event_scsi.so.5.0 00:04:06.696 SO libspdk_event_nbd.so.5.0 00:04:06.696 SO libspdk_event_ublk.so.2.0 00:04:06.696 SYMLINK libspdk_event_nbd.so 00:04:06.696 SYMLINK libspdk_event_ublk.so 00:04:06.696 SYMLINK libspdk_event_scsi.so 00:04:06.696 LIB libspdk_event_nvmf.a 00:04:06.696 SO libspdk_event_nvmf.so.5.0 00:04:06.696 SYMLINK libspdk_event_nvmf.so 00:04:06.696 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.697 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.958 LIB libspdk_event_vhost_scsi.a 00:04:06.958 LIB libspdk_event_iscsi.a 00:04:06.958 SO libspdk_event_vhost_scsi.so.2.0 00:04:06.958 SO libspdk_event_iscsi.so.5.0 00:04:06.958 SYMLINK libspdk_event_vhost_scsi.so 
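
The event/subsystems objects linked here (sock, iobuf, vmd, scheduler, accel, bdev, nbd, scsi, ublk, nvmf, vhost, iscsi) are the pluggable init/teardown units of SPDK's application framework; on startup the framework brings them up in dependency order, which is why each SPDK app in this build links some subset of them. A quick, hedged way to list which subsystems a freshly built target actually carries follows; both the binary path and the RPC name vary by release (older trees shipped app/spdk_tgt/spdk_tgt and called the RPC get_subsystems), so treat both as assumptions.

# Hedged sketch: list the event subsystems compiled into a target.
build/bin/spdk_tgt & tgt_pid=$!            # binary location varies by release
sleep 2                                    # crude wait for the RPC socket
scripts/rpc.py framework_get_subsystems    # older releases: get_subsystems
kill "$tgt_pid"
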
00:04:06.958 SYMLINK libspdk_event_iscsi.so 00:04:06.958 SO libspdk.so.5.0 00:04:06.958 SYMLINK libspdk.so 00:04:07.220 CXX app/trace/trace.o 00:04:07.220 CC app/trace_record/trace_record.o 00:04:07.220 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.220 CC app/nvmf_tgt/nvmf_main.o 00:04:07.220 CC app/spdk_tgt/spdk_tgt.o 00:04:07.220 CC examples/accel/perf/accel_perf.o 00:04:07.220 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.220 CC examples/blob/hello_world/hello_blob.o 00:04:07.220 CC test/accel/dif/dif.o 00:04:07.220 CC test/app/bdev_svc/bdev_svc.o 00:04:07.481 LINK nvmf_tgt 00:04:07.481 LINK iscsi_tgt 00:04:07.481 LINK spdk_tgt 00:04:07.481 LINK bdev_svc 00:04:07.481 LINK spdk_trace_record 00:04:07.481 LINK hello_bdev 00:04:07.481 LINK hello_blob 00:04:07.481 LINK spdk_trace 00:04:07.740 CC app/spdk_lspci/spdk_lspci.o 00:04:07.740 CC test/app/histogram_perf/histogram_perf.o 00:04:07.740 LINK dif 00:04:07.740 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.740 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.740 LINK accel_perf 00:04:07.740 CC examples/blob/cli/blobcli.o 00:04:07.740 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:07.740 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:07.740 CC app/spdk_nvme_perf/perf.o 00:04:07.740 LINK spdk_lspci 00:04:07.740 LINK histogram_perf 00:04:08.001 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.001 CC test/app/jsoncat/jsoncat.o 00:04:08.001 CC test/app/stub/stub.o 00:04:08.001 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.001 CC app/spdk_nvme_identify/identify.o 00:04:08.001 LINK jsoncat 00:04:08.262 LINK nvme_fuzz 00:04:08.262 LINK stub 00:04:08.262 LINK spdk_nvme_discover 00:04:08.262 CC app/spdk_top/spdk_top.o 00:04:08.262 LINK vhost_fuzz 00:04:08.262 LINK blobcli 00:04:08.523 CC examples/ioat/perf/perf.o 00:04:08.523 CC examples/nvme/hello_world/hello_world.o 00:04:08.523 CC examples/nvme/reconnect/reconnect.o 00:04:08.523 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.523 CC examples/nvme/arbitration/arbitration.o 00:04:08.523 LINK bdevperf 00:04:08.523 LINK ioat_perf 00:04:08.523 LINK hello_world 00:04:08.785 LINK spdk_nvme_perf 00:04:08.785 LINK reconnect 00:04:08.785 CC examples/ioat/verify/verify.o 00:04:08.785 CC examples/sock/hello_world/hello_sock.o 00:04:08.785 LINK arbitration 00:04:08.785 CC examples/vmd/lsvmd/lsvmd.o 00:04:08.785 LINK spdk_nvme_identify 00:04:09.047 CC examples/nvmf/nvmf/nvmf.o 00:04:09.047 LINK nvme_manage 00:04:09.047 LINK lsvmd 00:04:09.047 LINK verify 00:04:09.047 CC app/vhost/vhost.o 00:04:09.047 LINK hello_sock 00:04:09.047 CC app/spdk_dd/spdk_dd.o 00:04:09.047 CC examples/vmd/led/led.o 00:04:09.047 CC examples/nvme/hotplug/hotplug.o 00:04:09.308 LINK vhost 00:04:09.309 LINK led 00:04:09.309 LINK spdk_top 00:04:09.309 CC examples/util/zipf/zipf.o 00:04:09.309 LINK nvmf 00:04:09.309 CC examples/thread/thread/thread_ex.o 00:04:09.309 CC app/fio/nvme/fio_plugin.o 00:04:09.309 LINK zipf 00:04:09.309 LINK hotplug 00:04:09.568 CC app/fio/bdev/fio_plugin.o 00:04:09.568 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.568 LINK spdk_dd 00:04:09.568 CC examples/idxd/perf/perf.o 00:04:09.568 LINK thread 00:04:09.568 LINK iscsi_fuzz 00:04:09.568 CC test/bdev/bdevio/bdevio.o 00:04:09.568 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:09.568 LINK interrupt_tgt 00:04:09.827 CC test/blobfs/mkfs/mkfs.o 00:04:09.827 TEST_HEADER include/spdk/accel.h 00:04:09.827 TEST_HEADER include/spdk/accel_module.h 00:04:09.827 TEST_HEADER include/spdk/assert.h 00:04:09.827 TEST_HEADER include/spdk/barrier.h 00:04:09.827 
TEST_HEADER include/spdk/base64.h 00:04:09.827 TEST_HEADER include/spdk/bdev.h 00:04:09.827 TEST_HEADER include/spdk/bdev_module.h 00:04:09.827 TEST_HEADER include/spdk/bdev_zone.h 00:04:09.827 TEST_HEADER include/spdk/bit_array.h 00:04:09.827 TEST_HEADER include/spdk/bit_pool.h 00:04:09.827 TEST_HEADER include/spdk/blob_bdev.h 00:04:09.827 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:09.827 TEST_HEADER include/spdk/blobfs.h 00:04:09.827 TEST_HEADER include/spdk/blob.h 00:04:09.827 TEST_HEADER include/spdk/conf.h 00:04:09.827 TEST_HEADER include/spdk/config.h 00:04:09.827 TEST_HEADER include/spdk/cpuset.h 00:04:09.827 TEST_HEADER include/spdk/crc16.h 00:04:09.827 TEST_HEADER include/spdk/crc32.h 00:04:09.827 TEST_HEADER include/spdk/crc64.h 00:04:09.827 TEST_HEADER include/spdk/dif.h 00:04:09.827 TEST_HEADER include/spdk/dma.h 00:04:09.827 TEST_HEADER include/spdk/endian.h 00:04:09.827 TEST_HEADER include/spdk/env_dpdk.h 00:04:09.827 TEST_HEADER include/spdk/env.h 00:04:09.827 TEST_HEADER include/spdk/event.h 00:04:09.827 TEST_HEADER include/spdk/fd_group.h 00:04:09.827 TEST_HEADER include/spdk/fd.h 00:04:09.827 TEST_HEADER include/spdk/file.h 00:04:09.827 TEST_HEADER include/spdk/ftl.h 00:04:09.827 TEST_HEADER include/spdk/gpt_spec.h 00:04:09.827 TEST_HEADER include/spdk/hexlify.h 00:04:09.827 TEST_HEADER include/spdk/histogram_data.h 00:04:09.827 TEST_HEADER include/spdk/idxd.h 00:04:09.827 LINK cmb_copy 00:04:09.827 TEST_HEADER include/spdk/idxd_spec.h 00:04:09.827 TEST_HEADER include/spdk/init.h 00:04:09.828 TEST_HEADER include/spdk/ioat.h 00:04:09.828 TEST_HEADER include/spdk/ioat_spec.h 00:04:09.828 TEST_HEADER include/spdk/iscsi_spec.h 00:04:09.828 TEST_HEADER include/spdk/json.h 00:04:09.828 TEST_HEADER include/spdk/jsonrpc.h 00:04:09.828 TEST_HEADER include/spdk/likely.h 00:04:09.828 TEST_HEADER include/spdk/log.h 00:04:09.828 TEST_HEADER include/spdk/lvol.h 00:04:09.828 TEST_HEADER include/spdk/memory.h 00:04:09.828 TEST_HEADER include/spdk/mmio.h 00:04:09.828 TEST_HEADER include/spdk/nbd.h 00:04:09.828 TEST_HEADER include/spdk/notify.h 00:04:09.828 TEST_HEADER include/spdk/nvme.h 00:04:09.828 TEST_HEADER include/spdk/nvme_intel.h 00:04:09.828 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:09.828 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:09.828 LINK idxd_perf 00:04:09.828 TEST_HEADER include/spdk/nvme_spec.h 00:04:09.828 TEST_HEADER include/spdk/nvme_zns.h 00:04:09.828 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:09.828 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:09.828 TEST_HEADER include/spdk/nvmf.h 00:04:09.828 TEST_HEADER include/spdk/nvmf_spec.h 00:04:09.828 TEST_HEADER include/spdk/nvmf_transport.h 00:04:09.828 CC examples/nvme/abort/abort.o 00:04:09.828 TEST_HEADER include/spdk/opal.h 00:04:09.828 TEST_HEADER include/spdk/opal_spec.h 00:04:09.828 TEST_HEADER include/spdk/pci_ids.h 00:04:09.828 CC test/dma/test_dma/test_dma.o 00:04:09.828 TEST_HEADER include/spdk/pipe.h 00:04:09.828 TEST_HEADER include/spdk/queue.h 00:04:09.828 TEST_HEADER include/spdk/reduce.h 00:04:09.828 TEST_HEADER include/spdk/rpc.h 00:04:09.828 TEST_HEADER include/spdk/scheduler.h 00:04:09.828 TEST_HEADER include/spdk/scsi.h 00:04:09.828 TEST_HEADER include/spdk/scsi_spec.h 00:04:09.828 TEST_HEADER include/spdk/sock.h 00:04:09.828 TEST_HEADER include/spdk/stdinc.h 00:04:09.828 TEST_HEADER include/spdk/string.h 00:04:09.828 TEST_HEADER include/spdk/thread.h 00:04:09.828 TEST_HEADER include/spdk/trace.h 00:04:09.828 TEST_HEADER include/spdk/trace_parser.h 00:04:09.828 TEST_HEADER include/spdk/tree.h 
00:04:09.828 TEST_HEADER include/spdk/ublk.h 00:04:09.828 TEST_HEADER include/spdk/util.h 00:04:09.828 TEST_HEADER include/spdk/uuid.h 00:04:09.828 TEST_HEADER include/spdk/version.h 00:04:09.828 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.828 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:09.828 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:09.828 LINK mkfs 00:04:09.828 TEST_HEADER include/spdk/vhost.h 00:04:09.828 TEST_HEADER include/spdk/vmd.h 00:04:09.828 TEST_HEADER include/spdk/xor.h 00:04:09.828 TEST_HEADER include/spdk/zipf.h 00:04:09.828 CXX test/cpp_headers/accel.o 00:04:09.828 CXX test/cpp_headers/accel_module.o 00:04:09.828 LINK spdk_nvme 00:04:09.828 LINK spdk_bdev 00:04:10.086 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:10.086 CC test/env/vtophys/vtophys.o 00:04:10.086 LINK bdevio 00:04:10.086 CXX test/cpp_headers/assert.o 00:04:10.086 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:10.086 CC test/event/event_perf/event_perf.o 00:04:10.086 LINK pmr_persistence 00:04:10.086 LINK vtophys 00:04:10.086 CC test/lvol/esnap/esnap.o 00:04:10.086 CXX test/cpp_headers/barrier.o 00:04:10.086 CXX test/cpp_headers/base64.o 00:04:10.086 LINK test_dma 00:04:10.345 LINK abort 00:04:10.345 LINK env_dpdk_post_init 00:04:10.345 LINK event_perf 00:04:10.345 CXX test/cpp_headers/bdev.o 00:04:10.345 LINK mem_callbacks 00:04:10.345 CC test/nvme/aer/aer.o 00:04:10.345 CC test/rpc_client/rpc_client_test.o 00:04:10.345 CC test/event/reactor/reactor.o 00:04:10.345 CC test/env/memory/memory_ut.o 00:04:10.345 CC test/env/pci/pci_ut.o 00:04:10.345 CC test/nvme/reset/reset.o 00:04:10.603 CXX test/cpp_headers/bdev_module.o 00:04:10.603 LINK reactor 00:04:10.603 CC test/thread/poller_perf/poller_perf.o 00:04:10.603 CC test/event/reactor_perf/reactor_perf.o 00:04:10.603 LINK rpc_client_test 00:04:10.603 CXX test/cpp_headers/bdev_zone.o 00:04:10.603 LINK reactor_perf 00:04:10.603 LINK aer 00:04:10.603 LINK poller_perf 00:04:10.603 CXX test/cpp_headers/bit_array.o 00:04:10.603 CC test/event/app_repeat/app_repeat.o 00:04:10.603 LINK reset 00:04:10.861 CXX test/cpp_headers/bit_pool.o 00:04:10.861 CC test/nvme/sgl/sgl.o 00:04:10.861 CC test/nvme/e2edp/nvme_dp.o 00:04:10.861 CC test/nvme/overhead/overhead.o 00:04:10.861 CC test/nvme/err_injection/err_injection.o 00:04:10.861 LINK pci_ut 00:04:10.861 LINK app_repeat 00:04:10.861 CC test/nvme/startup/startup.o 00:04:10.861 CXX test/cpp_headers/blob_bdev.o 00:04:10.861 LINK err_injection 00:04:10.861 LINK overhead 00:04:11.120 CXX test/cpp_headers/blobfs_bdev.o 00:04:11.120 LINK sgl 00:04:11.120 LINK nvme_dp 00:04:11.120 LINK startup 00:04:11.120 CC test/event/scheduler/scheduler.o 00:04:11.120 CXX test/cpp_headers/blobfs.o 00:04:11.120 CC test/nvme/reserve/reserve.o 00:04:11.120 CXX test/cpp_headers/blob.o 00:04:11.120 CXX test/cpp_headers/conf.o 00:04:11.120 CC test/nvme/simple_copy/simple_copy.o 00:04:11.120 CXX test/cpp_headers/config.o 00:04:11.120 LINK memory_ut 00:04:11.120 CC test/nvme/connect_stress/connect_stress.o 00:04:11.120 CC test/nvme/boot_partition/boot_partition.o 00:04:11.378 CXX test/cpp_headers/cpuset.o 00:04:11.378 LINK scheduler 00:04:11.378 CC test/nvme/compliance/nvme_compliance.o 00:04:11.378 CXX test/cpp_headers/crc16.o 00:04:11.378 LINK reserve 00:04:11.378 CXX test/cpp_headers/crc32.o 00:04:11.378 LINK boot_partition 00:04:11.378 CXX test/cpp_headers/crc64.o 00:04:11.378 LINK connect_stress 00:04:11.378 LINK simple_copy 00:04:11.378 CXX test/cpp_headers/dif.o 00:04:11.378 CXX test/cpp_headers/dma.o 
00:04:11.378 CC test/nvme/fused_ordering/fused_ordering.o 00:04:11.378 CXX test/cpp_headers/endian.o 00:04:11.378 CXX test/cpp_headers/env_dpdk.o 00:04:11.378 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:11.636 CC test/nvme/fdp/fdp.o 00:04:11.636 CC test/nvme/cuse/cuse.o 00:04:11.636 CXX test/cpp_headers/env.o 00:04:11.636 CXX test/cpp_headers/event.o 00:04:11.636 LINK nvme_compliance 00:04:11.636 CXX test/cpp_headers/fd_group.o 00:04:11.636 CXX test/cpp_headers/fd.o 00:04:11.636 LINK fused_ordering 00:04:11.636 CXX test/cpp_headers/file.o 00:04:11.636 LINK doorbell_aers 00:04:11.636 CXX test/cpp_headers/ftl.o 00:04:11.636 CXX test/cpp_headers/gpt_spec.o 00:04:11.636 CXX test/cpp_headers/hexlify.o 00:04:11.636 CXX test/cpp_headers/histogram_data.o 00:04:11.636 CXX test/cpp_headers/idxd.o 00:04:11.894 CXX test/cpp_headers/idxd_spec.o 00:04:11.894 CXX test/cpp_headers/init.o 00:04:11.894 LINK fdp 00:04:11.894 CXX test/cpp_headers/ioat.o 00:04:11.894 CXX test/cpp_headers/ioat_spec.o 00:04:11.894 CXX test/cpp_headers/iscsi_spec.o 00:04:11.894 CXX test/cpp_headers/json.o 00:04:11.894 CXX test/cpp_headers/jsonrpc.o 00:04:11.894 CXX test/cpp_headers/likely.o 00:04:11.894 CXX test/cpp_headers/log.o 00:04:11.894 CXX test/cpp_headers/lvol.o 00:04:11.894 CXX test/cpp_headers/memory.o 00:04:11.894 CXX test/cpp_headers/mmio.o 00:04:11.894 CXX test/cpp_headers/nbd.o 00:04:11.894 CXX test/cpp_headers/notify.o 00:04:11.894 CXX test/cpp_headers/nvme.o 00:04:12.153 CXX test/cpp_headers/nvme_intel.o 00:04:12.153 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.153 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.153 CXX test/cpp_headers/nvme_spec.o 00:04:12.153 CXX test/cpp_headers/nvme_zns.o 00:04:12.153 CXX test/cpp_headers/nvmf_cmd.o 00:04:12.153 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:12.153 CXX test/cpp_headers/nvmf.o 00:04:12.153 CXX test/cpp_headers/nvmf_spec.o 00:04:12.153 CXX test/cpp_headers/nvmf_transport.o 00:04:12.153 CXX test/cpp_headers/opal.o 00:04:12.153 CXX test/cpp_headers/opal_spec.o 00:04:12.153 CXX test/cpp_headers/pci_ids.o 00:04:12.153 CXX test/cpp_headers/pipe.o 00:04:12.153 CXX test/cpp_headers/queue.o 00:04:12.411 CXX test/cpp_headers/reduce.o 00:04:12.411 CXX test/cpp_headers/rpc.o 00:04:12.411 CXX test/cpp_headers/scheduler.o 00:04:12.411 CXX test/cpp_headers/scsi.o 00:04:12.411 CXX test/cpp_headers/scsi_spec.o 00:04:12.411 CXX test/cpp_headers/sock.o 00:04:12.411 CXX test/cpp_headers/stdinc.o 00:04:12.411 CXX test/cpp_headers/string.o 00:04:12.411 LINK cuse 00:04:12.411 CXX test/cpp_headers/thread.o 00:04:12.411 CXX test/cpp_headers/trace.o 00:04:12.411 CXX test/cpp_headers/trace_parser.o 00:04:12.411 CXX test/cpp_headers/tree.o 00:04:12.411 CXX test/cpp_headers/ublk.o 00:04:12.411 CXX test/cpp_headers/util.o 00:04:12.411 CXX test/cpp_headers/uuid.o 00:04:12.411 CXX test/cpp_headers/version.o 00:04:12.672 CXX test/cpp_headers/vfio_user_pci.o 00:04:12.672 CXX test/cpp_headers/vfio_user_spec.o 00:04:12.672 CXX test/cpp_headers/vhost.o 00:04:12.672 CXX test/cpp_headers/vmd.o 00:04:12.672 CXX test/cpp_headers/xor.o 00:04:12.672 CXX test/cpp_headers/zipf.o 00:04:14.668 LINK esnap 00:04:14.929 00:04:14.929 real 0m43.121s 00:04:14.929 user 3m58.696s 00:04:14.929 sys 0m49.698s 00:04:14.929 14:50:38 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:14.929 14:50:38 -- common/autotest_common.sh@10 -- $ set +x 00:04:14.929 ************************************ 00:04:14.929 END TEST make 00:04:14.929 ************************************ 00:04:15.192 14:50:38 -- 
common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:15.192 14:50:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:15.192 14:50:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:15.192 14:50:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:15.192 14:50:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:15.192 14:50:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:15.192 14:50:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:15.192 14:50:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:15.192 14:50:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:15.192 14:50:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.192 14:50:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:15.192 14:50:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:15.192 14:50:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:15.192 14:50:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:15.192 14:50:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:15.192 14:50:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:15.192 14:50:38 -- scripts/common.sh@344 -- # : 1 00:04:15.192 14:50:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:15.192 14:50:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.192 14:50:38 -- scripts/common.sh@364 -- # decimal 1 00:04:15.192 14:50:38 -- scripts/common.sh@352 -- # local d=1 00:04:15.192 14:50:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.192 14:50:38 -- scripts/common.sh@354 -- # echo 1 00:04:15.192 14:50:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:15.192 14:50:38 -- scripts/common.sh@365 -- # decimal 2 00:04:15.192 14:50:38 -- scripts/common.sh@352 -- # local d=2 00:04:15.192 14:50:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.192 14:50:38 -- scripts/common.sh@354 -- # echo 2 00:04:15.192 14:50:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:15.192 14:50:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:15.192 14:50:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:15.192 14:50:38 -- scripts/common.sh@367 -- # return 0 00:04:15.192 14:50:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.192 14:50:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.192 --rc genhtml_branch_coverage=1 00:04:15.192 --rc genhtml_function_coverage=1 00:04:15.192 --rc genhtml_legend=1 00:04:15.192 --rc geninfo_all_blocks=1 00:04:15.192 --rc geninfo_unexecuted_blocks=1 00:04:15.192 00:04:15.192 ' 00:04:15.192 14:50:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.192 --rc genhtml_branch_coverage=1 00:04:15.192 --rc genhtml_function_coverage=1 00:04:15.192 --rc genhtml_legend=1 00:04:15.192 --rc geninfo_all_blocks=1 00:04:15.192 --rc geninfo_unexecuted_blocks=1 00:04:15.192 00:04:15.192 ' 00:04:15.192 14:50:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.192 --rc genhtml_branch_coverage=1 00:04:15.192 --rc genhtml_function_coverage=1 00:04:15.192 --rc genhtml_legend=1 00:04:15.192 --rc geninfo_all_blocks=1 00:04:15.192 --rc geninfo_unexecuted_blocks=1 00:04:15.192 00:04:15.192 ' 00:04:15.192 14:50:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:15.192 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:15.192 --rc genhtml_branch_coverage=1 00:04:15.192 --rc genhtml_function_coverage=1 00:04:15.192 --rc genhtml_legend=1 00:04:15.192 --rc geninfo_all_blocks=1 00:04:15.192 --rc geninfo_unexecuted_blocks=1 00:04:15.192 00:04:15.192 ' 00:04:15.192 14:50:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:15.192 14:50:38 -- nvmf/common.sh@7 -- # uname -s 00:04:15.192 14:50:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.192 14:50:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.192 14:50:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.192 14:50:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.192 14:50:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.192 14:50:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.192 14:50:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.192 14:50:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.192 14:50:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.192 14:50:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.192 14:50:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b078765-971d-4fa4-a361-94e987528f08 00:04:15.192 14:50:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b078765-971d-4fa4-a361-94e987528f08 00:04:15.192 14:50:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.192 14:50:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.192 14:50:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.193 14:50:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:15.193 14:50:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.193 14:50:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.193 14:50:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.193 14:50:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.193 14:50:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.193 14:50:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.193 14:50:38 -- paths/export.sh@5 -- # export PATH 00:04:15.193 14:50:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.193 14:50:38 -- nvmf/common.sh@46 -- # : 0 00:04:15.193 14:50:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:15.193 14:50:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:15.193 14:50:38 -- nvmf/common.sh@24 -- # '[' 
0 -eq 1 ']' 00:04:15.193 14:50:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.193 14:50:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.193 14:50:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:15.193 14:50:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:15.193 14:50:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:15.193 14:50:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:15.193 14:50:38 -- spdk/autotest.sh@32 -- # uname -s 00:04:15.193 14:50:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:15.193 14:50:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:15.193 14:50:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.193 14:50:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:15.193 14:50:38 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.193 14:50:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:15.193 14:50:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:15.193 14:50:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:15.193 14:50:38 -- spdk/autotest.sh@48 -- # udevadm_pid=60584 00:04:15.193 14:50:38 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:15.193 14:50:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:15.193 14:50:38 -- spdk/autotest.sh@54 -- # echo 60598 00:04:15.193 14:50:38 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:15.193 14:50:38 -- spdk/autotest.sh@56 -- # echo 60603 00:04:15.193 14:50:38 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:15.193 14:50:38 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:15.193 14:50:38 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:15.193 14:50:38 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:15.193 14:50:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.193 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.193 14:50:38 -- spdk/autotest.sh@70 -- # create_test_list 00:04:15.193 14:50:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:15.193 14:50:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.193 14:50:38 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:15.193 14:50:38 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:15.193 14:50:38 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:15.193 14:50:38 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:15.193 14:50:38 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:15.193 14:50:38 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:15.193 14:50:38 -- common/autotest_common.sh@1450 -- # uname 00:04:15.193 14:50:38 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:15.193 14:50:38 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:15.193 14:50:38 -- common/autotest_common.sh@1470 -- # uname 00:04:15.193 14:50:38 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:15.193 14:50:38 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:15.193 14:50:38 -- spdk/autotest.sh@81 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:15.453 lcov: LCOV version 1.15 00:04:15.453 14:50:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:22.023 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:22.023 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:22.024 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:22.024 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:22.024 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:22.024 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:43.968 14:51:03 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:43.968 14:51:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.968 14:51:03 -- common/autotest_common.sh@10 -- # set +x 00:04:43.968 14:51:03 -- spdk/autotest.sh@89 -- # rm -f 00:04:43.968 14:51:03 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.968 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:43.968 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:43.968 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:43.968 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:43.968 14:51:04 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:43.968 14:51:04 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:43.968 14:51:04 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:43.968 14:51:04 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:43.968 14:51:04 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.968 14:51:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:43.968 14:51:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:43.968 14:51:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.968 14:51:04 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:43.968 14:51:04 -- spdk/autotest.sh@108 -- # grep -v p 00:04:43.968 14:51:04 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:43.968 14:51:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.968 14:51:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.968 14:51:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:43.968 14:51:04 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:43.968 14:51:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.968 No valid GPT data, bailing 00:04:43.968 14:51:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.968 14:51:04 -- scripts/common.sh@393 -- # pt= 00:04:43.968 14:51:04 -- scripts/common.sh@394 -- # return 1 00:04:43.968 14:51:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00961291 s, 109 MB/s 00:04:43.969 14:51:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.969 14:51:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.969 14:51:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:43.969 14:51:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:43.969 14:51:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:43.969 No valid GPT data, bailing 
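
The scripts/common.sh xtrace above, where "lt 1.15 2" expands into cmp_versions, is the gate that picks lcov option spellings for the coverage runs: each version string is split on ".", "-" and ":" into an array, components are compared numerically from the left, and missing or non-numeric components count as 0. A minimal bash reconstruction of that logic, keeping the traced names (decimal, cmp_versions, lt) but not claiming to match the upstream file line for line:

decimal() {                                 # non-numeric components count as 0
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}
cmp_versions() {                            # usage: cmp_versions 1.15 '<' 2
    local ver1 ver2 ver1_l ver2_l op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    return 1                                # equal satisfies neither < nor >
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "old lcov: use the 1.x --rc option spelling"

In this log lt 1.15 2 succeeds (lcov reports version 1.15), which is why the trace exports the 1.x-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options rather than a 2.x spelling.
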
00:04:43.969 14:51:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:43.969 14:51:04 -- scripts/common.sh@393 -- # pt= 00:04:43.969 14:51:04 -- scripts/common.sh@394 -- # return 1 00:04:43.969 14:51:04 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379464 s, 276 MB/s 00:04:43.969 14:51:04 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.969 14:51:04 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.969 14:51:04 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:43.969 14:51:04 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:43.969 14:51:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:43.969 No valid GPT data, bailing 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # pt= 00:04:43.969 14:51:05 -- scripts/common.sh@394 -- # return 1 00:04:43.969 14:51:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460718 s, 228 MB/s 00:04:43.969 14:51:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.969 14:51:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.969 14:51:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:04:43.969 14:51:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:43.969 14:51:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:43.969 No valid GPT data, bailing 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # pt= 00:04:43.969 14:51:05 -- scripts/common.sh@394 -- # return 1 00:04:43.969 14:51:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397695 s, 264 MB/s 00:04:43.969 14:51:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.969 14:51:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.969 14:51:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:04:43.969 14:51:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:43.969 14:51:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:43.969 No valid GPT data, bailing 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # pt= 00:04:43.969 14:51:05 -- scripts/common.sh@394 -- # return 1 00:04:43.969 14:51:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413252 s, 254 MB/s 00:04:43.969 14:51:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:43.969 14:51:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:43.969 14:51:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:43.969 14:51:05 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:43.969 14:51:05 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:43.969 No valid GPT data, bailing 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:43.969 14:51:05 -- scripts/common.sh@393 -- # pt= 00:04:43.969 14:51:05 -- scripts/common.sh@394 -- # return 1 00:04:43.969 14:51:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:43.969 1+0 records in 00:04:43.969 1+0 records out 00:04:43.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004575 s, 229 MB/s 00:04:43.969 14:51:05 -- spdk/autotest.sh@116 -- # sync 00:04:43.969 14:51:05 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:43.969 14:51:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:43.969 14:51:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:43.969 14:51:06 -- spdk/autotest.sh@122 -- # uname -s 00:04:43.969 14:51:06 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:43.969 14:51:06 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:43.969 14:51:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.969 14:51:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.969 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:04:43.969 ************************************ 00:04:43.969 START TEST setup.sh 00:04:43.969 ************************************ 00:04:43.969 14:51:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:43.969 * Looking for test storage... 00:04:43.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.969 14:51:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.969 14:51:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.969 14:51:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.969 14:51:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.969 14:51:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.969 14:51:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.969 14:51:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.969 14:51:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.969 14:51:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.969 14:51:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.969 14:51:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.969 14:51:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.969 14:51:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.969 14:51:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.969 14:51:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.969 14:51:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.969 14:51:07 -- scripts/common.sh@344 -- # : 1 00:04:43.969 14:51:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.969 14:51:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.969 14:51:07 -- scripts/common.sh@364 -- # decimal 1 00:04:43.969 14:51:07 -- scripts/common.sh@352 -- # local d=1 00:04:43.969 14:51:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.969 14:51:07 -- scripts/common.sh@354 -- # echo 1 00:04:43.969 14:51:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.969 14:51:07 -- scripts/common.sh@365 -- # decimal 2 00:04:43.969 14:51:07 -- scripts/common.sh@352 -- # local d=2 00:04:43.969 14:51:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.969 14:51:07 -- scripts/common.sh@354 -- # echo 2 00:04:43.969 14:51:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.970 14:51:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.970 14:51:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.970 14:51:07 -- scripts/common.sh@367 -- # return 0 00:04:43.970 14:51:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- setup/test-setup.sh@10 -- # uname -s 00:04:43.970 14:51:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:43.970 14:51:07 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:43.970 14:51:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.970 14:51:07 -- common/autotest_common.sh@10 -- # set +x 00:04:43.970 ************************************ 00:04:43.970 START TEST acl 00:04:43.970 ************************************ 00:04:43.970 14:51:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:43.970 * Looking for test storage... 
00:04:43.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.970 14:51:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.970 14:51:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.970 14:51:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.970 14:51:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.970 14:51:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.970 14:51:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.970 14:51:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.970 14:51:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.970 14:51:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.970 14:51:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.970 14:51:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.970 14:51:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.970 14:51:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.970 14:51:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.970 14:51:07 -- scripts/common.sh@344 -- # : 1 00:04:43.970 14:51:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.970 14:51:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.970 14:51:07 -- scripts/common.sh@364 -- # decimal 1 00:04:43.970 14:51:07 -- scripts/common.sh@352 -- # local d=1 00:04:43.970 14:51:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.970 14:51:07 -- scripts/common.sh@354 -- # echo 1 00:04:43.970 14:51:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.970 14:51:07 -- scripts/common.sh@365 -- # decimal 2 00:04:43.970 14:51:07 -- scripts/common.sh@352 -- # local d=2 00:04:43.970 14:51:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.970 14:51:07 -- scripts/common.sh@354 -- # echo 2 00:04:43.970 14:51:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.970 14:51:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.970 14:51:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.970 14:51:07 -- scripts/common.sh@367 -- # return 0 00:04:43.970 14:51:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.970 --rc genhtml_branch_coverage=1 00:04:43.970 --rc genhtml_function_coverage=1 00:04:43.970 --rc genhtml_legend=1 00:04:43.970 --rc geninfo_all_blocks=1 00:04:43.970 --rc geninfo_unexecuted_blocks=1 00:04:43.970 00:04:43.970 ' 00:04:43.970 14:51:07 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:43.970 14:51:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:43.970 14:51:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:43.970 14:51:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:43.970 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:43.970 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:43.970 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:43.970 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:43.970 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.970 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.970 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:43.971 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:43.971 14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:43.971 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.971 14:51:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:43.971 14:51:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:43.971 14:51:07 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:43.971 
14:51:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:43.971 14:51:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:43.971 14:51:07 -- setup/acl.sh@12 -- # devs=() 00:04:43.971 14:51:07 -- setup/acl.sh@12 -- # declare -a devs 00:04:43.971 14:51:07 -- setup/acl.sh@13 -- # drivers=() 00:04:43.971 14:51:07 -- setup/acl.sh@13 -- # declare -A drivers 00:04:43.971 14:51:07 -- setup/acl.sh@51 -- # setup reset 00:04:43.971 14:51:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.971 14:51:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.907 14:51:08 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:44.907 14:51:08 -- setup/acl.sh@16 -- # local dev driver 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.907 14:51:08 -- setup/acl.sh@15 -- # setup output status 00:04:44.907 14:51:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.907 14:51:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:44.907 Hugepages 00:04:44.907 node hugesize free / total 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # continue 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.907 00:04:44.907 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # continue 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:44.907 14:51:08 -- setup/acl.sh@20 -- # continue 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:44.907 14:51:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:44.907 14:51:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.907 14:51:08 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:44.907 14:51:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:44.907 14:51:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:44.907 14:51:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:44.907 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.166 14:51:08 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:45.166 14:51:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:45.166 14:51:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:45.166 14:51:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:45.166 14:51:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:45.166 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.166 14:51:08 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:45.166 14:51:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:45.166 14:51:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:45.166 14:51:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:45.166 14:51:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
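
The pre-cleanup traced earlier (get_zoned_devs, then a spdk-gpt.py probe and a 1 MiB dd wipe per NVMe namespace) and the identical zoned re-check that acl.sh just repeated boil down to one pattern: a device counts as zoned when /sys/block/<dev>/queue/zoned reads anything other than "none", and an idle, GPT-less namespace gets its first megabyte zeroed so stale metadata cannot bleed into the tests. A condensed bash sketch of that flow, with helper names taken from the trace and bodies reconstructed (the traced block_in_use also falls back to blkid -s PTTYPE before deciding, which is elided here):

is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    is_block_zoned "$(basename "$dev")" && continue   # never wipe zoned media
    if ! scripts/spdk-gpt.py "$dev"; then             # "No valid GPT data, bailing"
        dd if=/dev/zero of="$dev" bs=1M count=1       # matches the 1+0 records above
    fi
done
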
00:04:45.166 14:51:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.166 14:51:08 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:45.166 14:51:08 -- setup/acl.sh@54 -- # run_test denied denied 00:04:45.166 14:51:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.166 14:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.166 14:51:08 -- common/autotest_common.sh@10 -- # set +x 00:04:45.166 ************************************ 00:04:45.166 START TEST denied 00:04:45.166 ************************************ 00:04:45.166 14:51:08 -- common/autotest_common.sh@1114 -- # denied 00:04:45.166 14:51:08 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:45.166 14:51:08 -- setup/acl.sh@38 -- # setup output config 00:04:45.166 14:51:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.166 14:51:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.166 14:51:08 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:46.102 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:46.102 14:51:09 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:46.102 14:51:09 -- setup/acl.sh@28 -- # local dev driver 00:04:46.102 14:51:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:46.102 14:51:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:46.102 14:51:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:46.102 14:51:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:46.102 14:51:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:46.102 14:51:09 -- setup/acl.sh@41 -- # setup reset 00:04:46.102 14:51:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.102 14:51:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.676 00:04:52.676 real 0m6.801s 00:04:52.676 user 0m0.675s 00:04:52.676 sys 0m1.140s 00:04:52.676 14:51:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.676 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.676 ************************************ 00:04:52.676 END TEST denied 00:04:52.676 ************************************ 00:04:52.676 14:51:15 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:52.676 14:51:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.676 14:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.676 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.676 ************************************ 00:04:52.676 START TEST allowed 00:04:52.676 ************************************ 00:04:52.676 14:51:15 -- common/autotest_common.sh@1114 -- # allowed 00:04:52.676 14:51:15 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:52.676 14:51:15 -- setup/acl.sh@45 -- # setup output config 00:04:52.676 14:51:15 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:52.676 14:51:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.676 14:51:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.934 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.934 14:51:16 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:52.934 14:51:16 -- setup/acl.sh@28 -- # local dev driver 00:04:52.934 14:51:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:52.934 14:51:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:52.934 14:51:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:52.934 14:51:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:52.934 14:51:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:52.934 14:51:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:52.934 14:51:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:52.934 14:51:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:52.934 14:51:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:52.934 14:51:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:52.934 14:51:16 -- setup/acl.sh@48 -- # setup reset 00:04:52.934 14:51:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.934 14:51:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.869 00:04:53.869 real 0m1.974s 00:04:53.869 user 0m0.839s 00:04:53.869 sys 0m1.042s 00:04:53.869 ************************************ 00:04:53.869 END TEST allowed 00:04:53.869 ************************************ 00:04:53.869 14:51:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.869 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.135 00:04:54.135 real 0m10.349s 00:04:54.135 user 0m2.200s 00:04:54.135 sys 0m3.072s 00:04:54.135 14:51:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.135 ************************************ 00:04:54.135 END TEST acl 00:04:54.135 ************************************ 00:04:54.135 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.135 14:51:17 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:54.135 14:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.135 14:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.135 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.135 ************************************ 00:04:54.135 START TEST hugepages 00:04:54.135 ************************************ 00:04:54.135 14:51:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:54.135 * Looking for test storage... 
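Before the hugepages suite gets going, it is worth summarizing what the two ACL TESTs above actually checked: after setup.sh has honored PCI_BLOCKED / PCI_ALLOWED, each controller's sysfs driver link is resolved and compared against the driver it is expected to keep. A condensed reading of the traced acl.sh logic (not the script verbatim; error handling is reduced to a return code):

  verify() {
      local dev driver
      for dev in "$@"; do
          [[ -e /sys/bus/pci/devices/$dev ]] || return 1
          driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
          [[ ${driver##*/} == nvme ]] || return 1      # device must still be bound to nvme
      done
  }

In the denied test 0000:00:06.0 was blocked, so setup.sh printed "Skipping denied controller" and verify found it still on nvme; in the allowed test only 0000:00:06.0 was allowed, so it moved to uio_pci_generic while 0000:00:07.0-09.0 stayed on nvme.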
00:04:54.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:54.135 14:51:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.135 14:51:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.135 14:51:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.135 14:51:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.135 14:51:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.135 14:51:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.135 14:51:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.135 14:51:17 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.135 14:51:17 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.135 14:51:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.135 14:51:17 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.135 14:51:17 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.135 14:51:17 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.135 14:51:17 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.135 14:51:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.135 14:51:17 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.135 14:51:17 -- scripts/common.sh@344 -- # : 1 00:04:54.135 14:51:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.135 14:51:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.135 14:51:17 -- scripts/common.sh@364 -- # decimal 1 00:04:54.135 14:51:17 -- scripts/common.sh@352 -- # local d=1 00:04:54.135 14:51:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.135 14:51:17 -- scripts/common.sh@354 -- # echo 1 00:04:54.135 14:51:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.135 14:51:17 -- scripts/common.sh@365 -- # decimal 2 00:04:54.135 14:51:17 -- scripts/common.sh@352 -- # local d=2 00:04:54.135 14:51:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.135 14:51:17 -- scripts/common.sh@354 -- # echo 2 00:04:54.135 14:51:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.135 14:51:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.135 14:51:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.135 14:51:17 -- scripts/common.sh@367 -- # return 0 00:04:54.135 14:51:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.135 14:51:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.135 --rc genhtml_branch_coverage=1 00:04:54.135 --rc genhtml_function_coverage=1 00:04:54.135 --rc genhtml_legend=1 00:04:54.135 --rc geninfo_all_blocks=1 00:04:54.135 --rc geninfo_unexecuted_blocks=1 00:04:54.135 00:04:54.135 ' 00:04:54.135 14:51:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.135 --rc genhtml_branch_coverage=1 00:04:54.135 --rc genhtml_function_coverage=1 00:04:54.135 --rc genhtml_legend=1 00:04:54.135 --rc geninfo_all_blocks=1 00:04:54.135 --rc geninfo_unexecuted_blocks=1 00:04:54.135 00:04:54.135 ' 00:04:54.135 14:51:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.135 --rc genhtml_branch_coverage=1 00:04:54.135 --rc genhtml_function_coverage=1 00:04:54.135 --rc genhtml_legend=1 00:04:54.135 --rc geninfo_all_blocks=1 00:04:54.135 --rc geninfo_unexecuted_blocks=1 00:04:54.135 00:04:54.135 ' 00:04:54.135 14:51:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.135 --rc genhtml_branch_coverage=1
00:04:54.135 --rc genhtml_function_coverage=1
00:04:54.135 --rc genhtml_legend=1
00:04:54.135 --rc geninfo_all_blocks=1
00:04:54.135 --rc geninfo_unexecuted_blocks=1
00:04:54.135
00:04:54.135 '
00:04:54.135 14:51:17 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:54.135 14:51:17 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:54.135 14:51:17 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:54.135 14:51:17 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:54.135 14:51:17 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:54.135 14:51:17 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:54.135 14:51:17 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:54.135 14:51:17 -- setup/common.sh@18 -- # local node=
00:04:54.135 14:51:17 -- setup/common.sh@19 -- # local var val
00:04:54.135 14:51:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.135 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.135 14:51:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.135 14:51:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.135 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.135 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.135 14:51:17 -- setup/common.sh@31 -- # IFS=': '
00:04:54.135 14:51:17 -- setup/common.sh@31 -- # read -r var val _
00:04:54.135 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 4354096 kB' 'MemAvailable: 7322928 kB' 'Buffers: 2684 kB' 'Cached: 3171208 kB' 'SwapCached: 0 kB' 'Active: 465080 kB' 'Inactive: 2824136 kB' 'Active(anon): 125860 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 116768 kB' 'Mapped: 51176 kB' 'Shmem: 10536 kB' 'KReclaimable: 84808 kB' 'Slab: 190248 kB' 'SReclaimable: 84808 kB' 'SUnreclaim: 105440 kB' 'KernelStack: 6816 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 310340' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:54.135 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:54.135 14:51:17 -- setup/common.sh@32 -- # continue
00:04:54.135 14:51:17 -- setup/common.sh@31 -- # IFS=': '
00:04:54.135 14:51:17 -- setup/common.sh@31 -- # read -r var val _
[the same compare / continue / IFS=': ' / read cycle repeats for every following /proc/meminfo key until Hugepagesize is reached]
00:04:54.406 14:51:17 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:54.406 14:51:17 -- setup/common.sh@33 -- # echo 2048
00:04:54.406 14:51:17 -- setup/common.sh@33 -- # return 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:54.407 14:51:17 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:54.407 14:51:17 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:54.407 14:51:17 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:54.407 14:51:17 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:54.407 14:51:17 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:54.407 14:51:17 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:54.407 14:51:17 -- setup/hugepages.sh@207 -- # get_nodes
00:04:54.407 14:51:17 -- setup/hugepages.sh@27 -- # local node
00:04:54.407 14:51:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.407 14:51:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:54.407 14:51:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:54.407 14:51:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.407 14:51:17 -- setup/hugepages.sh@208 -- # clear_hp
00:04:54.407 14:51:17 -- setup/hugepages.sh@37 -- # local node hp
00:04:54.407 14:51:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:54.407 14:51:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:54.407 14:51:17 -- setup/hugepages.sh@41 -- # echo 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:54.407 14:51:17 -- setup/hugepages.sh@41 -- # echo 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:54.407 14:51:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:54.407 14:51:17 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:54.407 14:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:54.407 14:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:54.407 14:51:17 -- common/autotest_common.sh@10 -- # set +x
00:04:54.407 ************************************
00:04:54.407 START TEST default_setup
00:04:54.407 ************************************
00:04:54.407 14:51:17 -- common/autotest_common.sh@1114 -- # default_setup
00:04:54.407 14:51:17 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:54.407 14:51:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:54.407 14:51:17 -- setup/hugepages.sh@51 -- # shift
00:04:54.407 14:51:17 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:54.407 14:51:17 -- setup/hugepages.sh@52 -- # local node_ids
00:04:54.407 14:51:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.407 14:51:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:54.407 14:51:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:54.407 14:51:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.407 14:51:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:54.407 14:51:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:54.407 14:51:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.407 14:51:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.407 14:51:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:54.407 14:51:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:54.407 14:51:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:54.407 14:51:17 -- setup/hugepages.sh@73 -- # return 0
00:04:54.407 14:51:17 -- setup/hugepages.sh@137 -- # setup output
00:04:54.407 14:51:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.407 14:51:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
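The get_test_nr_hugepages trace above settles on nr_hugepages=1024 before setup.sh runs: the requested size is expressed in kB and divided by the default hugepage size detected earlier. A one-line re-derivation (illustrative variable names, not the script's own):

  size_kb=2097152                                                           # 2 GiB, the argument to get_test_nr_hugepages
  hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM
  echo $(( size_kb / hugepagesize_kb ))                                     # -> 1024 pages, all assigned to node 0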
00:04:54.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:55.243 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:55.243 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:55.243 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:55.243 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:55.243 14:51:18 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:55.243 14:51:18 -- setup/hugepages.sh@89 -- # local node
00:04:55.243 14:51:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.243 14:51:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:55.243 14:51:18 -- setup/hugepages.sh@92 -- # local surp
00:04:55.243 14:51:18 -- setup/hugepages.sh@93 -- # local resv
00:04:55.243 14:51:18 -- setup/hugepages.sh@94 -- # local anon
00:04:55.243 14:51:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:55.243 14:51:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:55.243 14:51:18 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:55.243 14:51:18 -- setup/common.sh@18 -- # local node=
00:04:55.243 14:51:18 -- setup/common.sh@19 -- # local var val
00:04:55.243 14:51:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.243 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.243 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.243 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.243 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.243 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.243 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.243 14:51:18 -- setup/common.sh@31 -- # read -r var val _
00:04:55.243 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6447228 kB' 'MemAvailable: 9415900 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467136 kB' 'Inactive: 2824152 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118720 kB' 'Mapped: 51140 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 189864 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105408 kB' 'KernelStack: 6784 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317708' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:55.243 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.243 14:51:18 -- setup/common.sh@32 -- # continue
00:04:55.243 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.243 14:51:18 -- setup/common.sh@31 -- # read -r var val _
[the same compare / continue / IFS=': ' / read cycle repeats for each following /proc/meminfo key until the target is reached]
00:04:55.244 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.244 14:51:18 -- setup/common.sh@33 -- # echo 0
00:04:55.244 14:51:18 -- setup/common.sh@33 -- # return 0
00:04:55.244 14:51:18 -- setup/hugepages.sh@97 -- # anon=0
00:04:55.244 14:51:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.244 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.244 14:51:18 -- setup/common.sh@18 -- # local node=
00:04:55.244 14:51:18 -- setup/common.sh@19 -- # local var val
00:04:55.244 14:51:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.244 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.244 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.244 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.244 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.245 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.245 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.245 14:51:18 -- setup/common.sh@31 -- # read -r var val _
00:04:55.245 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6447200 kB' 'MemAvailable: 9415876 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466980 kB' 'Inactive: 2824156 kB' 'Active(anon): 127760 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118824 kB' 'Mapped: 51140 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 189864 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105408 kB' 'KernelStack: 6784 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317708' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:55.245 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.245 14:51:18 -- setup/common.sh@32 -- # continue
00:04:55.245 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.245 14:51:18 -- setup/common.sh@31 -- # read -r var val _
[the cycle repeats for each following /proc/meminfo key until HugePages_Surp is reached]
00:04:55.247 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.247 14:51:18 -- setup/common.sh@33 -- # echo 0
00:04:55.247 14:51:18 -- setup/common.sh@33 -- # return 0
00:04:55.247 14:51:18 -- setup/hugepages.sh@99 -- # surp=0
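The scans above (Hugepagesize, AnonHugePages, HugePages_Surp) and the HugePages_Rsvd scan that follows are all the same get_meminfo helper walking /proc/meminfo one "key: value" line at a time and echoing the value of the first matching key. A simplified equivalent of the traced logic — the real setup/common.sh helper additionally snapshots the file with mapfile and can target a per-node /sys/devices/system/node/nodeN/meminfo, which this sketch omits:

  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # first field is the key, second the value; a trailing "kB" lands in $_
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  get_meminfo HugePages_Rsvd   # -> 0 in this run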
00:04:55.511 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.511 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.511 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.511 14:51:18 -- setup/common.sh@31 -- # read -r var val _
00:04:55.511 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6447200 kB' 'MemAvailable: 9415876 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 2824156 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118796 kB' 'Mapped: 51012 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 189852 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105396 kB' 'KernelStack: 6800 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
[... repetitive xtrace trimmed: every key from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d test at setup/common.sh@32 before HugePages_Rsvd matches ...]
00:04:55.513 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:55.513 14:51:18 -- setup/common.sh@33 -- # echo 0
00:04:55.513 14:51:18 -- setup/common.sh@33 -- # return 0
00:04:55.513 14:51:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:55.513 14:51:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:55.513 nr_hugepages=1024
00:04:55.513 14:51:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:55.513 resv_hugepages=0
00:04:55.513 14:51:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:55.513 surplus_hugepages=0
00:04:55.513 14:51:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:55.513 anon_hugepages=0
00:04:55.513 14:51:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.513 14:51:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:55.513 14:51:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:55.513 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:55.513 14:51:18 -- setup/common.sh@18 -- # local node=
00:04:55.513 14:51:18 -- setup/common.sh@19 -- # local var val
00:04:55.513 14:51:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.513 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.513 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.513 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.513 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.513 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.513 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.513 14:51:18 -- setup/common.sh@31 -- # read -r var val _
00:04:55.513 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6447200 kB' 'MemAvailable: 9415876 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466856 kB' 'Inactive: 2824156 kB' 'Active(anon): 127636 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 51012 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 189852 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105396 kB' 'KernelStack: 6784 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
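The hugepages.sh@107-@110 checks above encode the pool's bookkeeping identity: HugePages_Total must equal the requested page count plus surplus plus reserved pages. Spelled out with this run's numbers (variable names are the trace's, the standalone form is mine):

  # The identity hugepages.sh@107 checks, with this run's values:
  nr_hugepages=1024 surp=0 resv=0
  total=1024   # HugePages_Total from the snapshot above
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage ledger consistent'
  # 1024 == 1024 + 0 + 0 holds, so the test proceeds to the per-node pass.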
[... repetitive xtrace trimmed: every key from MemTotal through Unaccepted fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l test at setup/common.sh@32 before HugePages_Total matches ...]
00:04:55.514 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:55.514 14:51:18 -- setup/common.sh@33 -- # echo 1024
00:04:55.514 14:51:18 -- setup/common.sh@33 -- # return 0
00:04:55.514 14:51:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.514 14:51:18 -- setup/hugepages.sh@112 -- # get_nodes
00:04:55.514 14:51:18 -- setup/hugepages.sh@27 -- # local node
00:04:55.514 14:51:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:55.514 14:51:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:55.514 14:51:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:55.514 14:51:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:55.514 14:51:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.514 14:51:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.514 14:51:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:55.514 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.514 14:51:18 -- setup/common.sh@18 -- # local node=0
00:04:55.514 14:51:18 -- setup/common.sh@19 -- # local var val
00:04:55.514 14:51:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.514 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.514 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:55.514 14:51:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:55.514 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.514 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.514 14:51:18 -- setup/common.sh@31 -- # IFS=': '
00:04:55.514 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6447200 kB' 'MemUsed: 5789896 kB' 'SwapCached: 0 kB' 'Active: 466908 kB' 'Inactive: 2824164 kB' 'Active(anon): 127688 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 3173880 kB' 'Mapped: 51012 kB' 'AnonPages: 118756 kB' 'Shmem: 10496 kB' 'KernelStack: 6784 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 189852 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:55.514 14:51:18 -- setup/common.sh@31 -- # read -r var val _
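Worth noting in the trace above: once get_meminfo is called with a node id (here 0), common.sh@23-@24 switch mem_f from /proc/meminfo to the per-node sysfs file, whose lines carry a "Node 0 " prefix that the mem=(...) expansion strips. A minimal sketch of that branch under my own wrapper shape (extglob is required for the +([0-9]) pattern):

  #!/usr/bin/env bash
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  node=0
  mem_f=/proc/meminfo
  # Same branch the trace shows: prefer the per-node file when it exists.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node 0 " prefix sysfs adds
  printf '%s\n' "${mem[@]:0:3}"          # first few normalized lines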
[... repetitive xtrace trimmed: every node0 key from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p test at setup/common.sh@32 before HugePages_Surp matches ...]
00:04:55.515 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.515 14:51:18 -- setup/common.sh@33 -- # echo 0
00:04:55.515 14:51:18 -- setup/common.sh@33 -- # return 0
00:04:55.515 14:51:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.515 14:51:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.515 14:51:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.515 14:51:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.515 14:51:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:55.515 node0=1024 expecting 1024
00:04:55.515 14:51:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:55.515
00:04:55.515 real 0m1.135s
00:04:55.515 user 0m0.465s
00:04:55.515 sys 0m0.607s
00:04:55.515 14:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:55.515 14:51:18 -- common/autotest_common.sh@10 -- # set +x
00:04:55.515 ************************************
00:04:55.515 END TEST default_setup
00:04:55.515 ************************************
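The @126-@130 records above show how the test decides pass or fail: each node's observed pool is rendered as a "nodeN=... expecting ..." line and string-compared against the expectation. Roughly, as a hypothetical condensation under the trace's array names:

  # Sketch of hugepages.sh@126-@130; array names mirror the trace.
  nodes_test[0]=1024
  expected=1024
  for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting $expected"
    [[ ${nodes_test[node]} == "$expected" ]] || exit 1
  done
  # Matches the log: "node0=1024 expecting 1024", then [[ 1024 == 1024 ]] succeeds.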
00:04:55.515 14:51:18 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:55.515 14:51:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:55.515 14:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:55.515 14:51:18 -- common/autotest_common.sh@10 -- # set +x
00:04:55.515 ************************************
00:04:55.515 START TEST per_node_1G_alloc
00:04:55.515 ************************************
00:04:55.515 14:51:18 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:55.515 14:51:18 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:55.515 14:51:18 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:55.515 14:51:18 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:55.515 14:51:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:55.515 14:51:18 -- setup/hugepages.sh@51 -- # shift
00:04:55.515 14:51:18 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:55.515 14:51:18 -- setup/hugepages.sh@52 -- # local node_ids
00:04:55.515 14:51:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.515 14:51:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:55.515 14:51:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:55.515 14:51:18 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:55.515 14:51:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.515 14:51:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:55.515 14:51:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:55.515 14:51:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.515 14:51:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.515 14:51:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:55.515 14:51:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:55.515 14:51:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:55.515 14:51:18 -- setup/hugepages.sh@73 -- # return 0
00:04:55.515 14:51:18 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:55.515 14:51:18 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:55.515 14:51:18 -- setup/hugepages.sh@146 -- # setup output
00:04:55.515 14:51:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.515 14:51:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:55.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:56.040 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.040 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.040 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.040 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.040 14:51:19 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:56.040 14:51:19 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:56.040 14:51:19 -- setup/hugepages.sh@89 -- # local node
00:04:56.040 14:51:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.040 14:51:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.040 14:51:19 -- setup/hugepages.sh@92 -- # local surp
00:04:56.040 14:51:19 -- setup/hugepages.sh@93 -- # local resv
00:04:56.040 14:51:19 -- setup/hugepages.sh@94 -- # local anon
00:04:56.040 14:51:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.040 14:51:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.040 14:51:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.040 14:51:19 -- setup/common.sh@18 -- # local node=
00:04:56.040 14:51:19 -- setup/common.sh@19 -- # local var val
00:04:56.040 14:51:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.040 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.040 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.040 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.040 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.040 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.040 14:51:19 -- setup/common.sh@31 -- # IFS=': '
00:04:56.040 14:51:19 -- setup/common.sh@31 -- # read -r var val _
00:04:56.040 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7503724 kB' 'MemAvailable: 10472408 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467244 kB' 'Inactive: 2824164 kB' 'Active(anon): 128024 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119080 kB' 'Mapped: 51132 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190160 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105704 kB' 'KernelStack: 6804 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55912 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
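The sizing earlier in this test (hugepages.sh@49-@57) divides the requested 1048576 kB by the host's 2048 kB default hugepage size to get the 512 pages that NRHUGE/HUGENODE hand to scripts/setup.sh, and the snapshot above confirms the result (HugePages_Total: 512, Hugetlb: 1048576 kB). The arithmetic, spelled out with my own variable names:

  size_kb=1048576              # requested 1G allocation (hugepages.sh@49)
  default_hugepage_kb=2048     # Hugepagesize reported in every snapshot here
  (( nr_hugepages = size_kb / default_hugepage_kb ))
  echo "nr_hugepages=$nr_hugepages"   # 512, exported as NRHUGE
  # The trace then runs: NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh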
[... repetitive xtrace trimmed: every key from MemTotal through HardwareCorrupted fails the \A\n\o\n\H\u\g\e\P\a\g\e\s test at setup/common.sh@32 before AnonHugePages matches ...]
00:04:56.040 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.040 14:51:19 -- setup/common.sh@33 -- # echo 0
00:04:56.040 14:51:19 -- setup/common.sh@33 -- # return 0
00:04:56.040 14:51:19 -- setup/hugepages.sh@97 -- # anon=0
00:04:56.041 14:51:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.041 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.041 14:51:19 -- setup/common.sh@18 -- # local node=
00:04:56.041 14:51:19 -- setup/common.sh@19 -- # local var val
00:04:56.041 14:51:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.041 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.041 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.041 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.041 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.041 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': '
00:04:56.041 14:51:19 --
setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7503724 kB' 'MemAvailable: 10472408 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467196 kB' 'Inactive: 2824164 kB' 'Active(anon): 127976 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119032 kB' 'Mapped: 51016 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190152 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105696 kB' 'KernelStack: 6756 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # 
continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.042 14:51:19 -- setup/common.sh@31 -- # 
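The condensed scans above are all iterations of the same parse loop in setup/common.sh's get_meminfo: split each meminfo line on ': ' into key and value, skip until the requested key matches, then print its value. A minimal standalone sketch of that loop, reconstructed from the xtrace output (get_meminfo_sketch is a hypothetical name; the in-tree function may differ in detail):

    # Sketch reconstructed from the xtrace above, not the verbatim source.
    # Prints the value of one meminfo key (kB for sizes, a bare count for
    # the HugePages_* keys).
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo mem
        # per-node statistics come from sysfs when a node id is passed
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }") # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}" # e.g. AnonHugePages -> 0 above
                return 0
            fi
        done
        return 1
    }

get_meminfo_sketch HugePages_Surp reads /proc/meminfo, while get_meminfo_sketch HugePages_Surp 0 reads the node0 sysfs copy; those are the two call shapes this trace exercises.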
00:04:56.042 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.042 14:51:19 -- setup/common.sh@33 -- # echo 0
00:04:56.042 14:51:19 -- setup/common.sh@33 -- # return 0
00:04:56.042 14:51:19 -- setup/hugepages.sh@99 -- # surp=0
00:04:56.042 14:51:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.042 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.042 14:51:19 -- setup/common.sh@18 -- # local node=
00:04:56.042 14:51:19 -- setup/common.sh@19 -- # local var val
00:04:56.042 14:51:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.042 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.042 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.042 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.042 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.042 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.042 14:51:19 -- setup/common.sh@31 -- # IFS=': '
00:04:56.042 14:51:19 -- setup/common.sh@31 -- # read -r var val _
00:04:56.042 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7503892 kB' 'MemAvailable: 10472576 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466868 kB' 'Inactive: 2824164 kB' 'Active(anon): 127648 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118776 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190220 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105764 kB' 'KernelStack: 6800 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
[xtrace loop output condensed: setup/common.sh@32 tests every /proc/meminfo key from MemTotal onward against HugePages_Rsvd and issues continue for every non-match]
00:04:56.044 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.044 14:51:19 -- setup/common.sh@33 -- # echo 0
00:04:56.044 14:51:19 -- setup/common.sh@33 -- # return 0
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:56.044 14:51:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:56.044 14:51:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:56.044 14:51:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.044 14:51:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.044 14:51:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.044 14:51:19 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:56.044 14:51:19 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:56.044 14:51:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.044 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.044 14:51:19 -- setup/common.sh@18 -- # local node=
00:04:56.044 14:51:19 -- setup/common.sh@19 -- # local var val
00:04:56.044 14:51:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.044 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.044 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.044 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.044 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.044 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.044 14:51:19 -- setup/common.sh@31 -- # IFS=': '
00:04:56.044 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7504236 kB' 'MemAvailable: 10472920 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466880 kB' 'Inactive: 2824164 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118744 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190220 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105764 kB' 'KernelStack: 6784 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:56.044 14:51:19 -- setup/common.sh@31 -- # read -r var val _
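At this point the test holds anon=0, surp=0, resv=0 and the echoed nr_hugepages=512, and the checks at setup/hugepages.sh@107-@110 assert that the kernel's pool agrees with the request. A hedged sketch of that arithmetic, reusing the hypothetical helper above (the real script inlines these checks rather than wrapping them in a function):

    # Sketch of the consistency check traced at setup/hugepages.sh@107/@110.
    verify_hugepage_pool() {
        local want=$1 surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp) || return 1
        resv=$(get_meminfo_sketch HugePages_Rsvd) || return 1
        total=$(get_meminfo_sketch HugePages_Total) || return 1
        # the request is satisfied when the kernel-reported pool equals the
        # requested pages plus surplus and reserved: here 512 == 512 + 0 + 0
        (( total == want + surp + resv ))
    }
    verify_hugepage_pool 512 && echo 'hugepage pool consistent'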
[xtrace loop output condensed: setup/common.sh@32 tests every /proc/meminfo key against HugePages_Total and issues continue for every non-match]
00:04:56.045 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.045 14:51:19 -- setup/common.sh@33 -- # echo 512
00:04:56.045 14:51:19 -- setup/common.sh@33 -- # return 0
00:04:56.045 14:51:19 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:56.045 14:51:19 -- setup/hugepages.sh@112 -- # get_nodes
00:04:56.045 14:51:19 -- setup/hugepages.sh@27 -- # local node
00:04:56.045 14:51:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.045 14:51:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:56.045 14:51:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:56.045 14:51:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.045 14:51:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.045 14:51:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
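get_nodes and the loop that follows spread the expectation across NUMA nodes; this VM has a single node, so no_nodes=1 and only node0 is checked. A sketch of that walk under stated assumptions: reading each node's count from its sysfs meminfo is an assumed source, since the trace only shows the resulting assignment nodes_sys[0]=512.

    # Sketch of the per-node walk traced above (single NUMA node here).
    declare -A nodes_sys
    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # assumed source of the 512 per node; the trace elides it
        nodes_sys[$node]=$(get_meminfo_sketch HugePages_Total "$node")
    done
    for node in "${!nodes_sys[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting 512"
    done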
nodes_test[node] += resv ))
00:04:56.045 14:51:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.045 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.045 14:51:19 -- setup/common.sh@18 -- # local node=0
00:04:56.045 14:51:19 -- setup/common.sh@19 -- # local var val
00:04:56.045 14:51:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.045 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.045 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.045 14:51:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.045 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.045 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.045 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7504496 kB' 'MemUsed: 4732600 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 2824164 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 3173880 kB' 'Mapped: 50960 kB' 'AnonPages: 118872 kB' 'Shmem: 10496 kB' 'KernelStack: 6800 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190220 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:56.045 14:51:19 -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop walks every node0 meminfo field from MemTotal through HugePages_Free, comparing each key against HugePages_Surp and continuing on every mismatch]
00:04:56.046 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.046 14:51:19 -- setup/common.sh@33 -- # echo 0
00:04:56.046 14:51:19 -- setup/common.sh@33 -- # return 0
00:04:56.046 14:51:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.046 14:51:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.046 14:51:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.046 14:51:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.046 node0=512 expecting 512
00:04:56.046 ************************************
00:04:56.046 END TEST per_node_1G_alloc
00:04:56.046 ************************************
00:04:56.046 14:51:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:56.046 14:51:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:56.046 real 0m0.576s
00:04:56.046 user 0m0.254s
00:04:56.046 sys 0m0.339s
00:04:56.046 14:51:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:56.046 14:51:19 -- common/autotest_common.sh@10 -- # set +x
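Every get_meminfo call in this test produces the same trace shape seen above: the function resolves either /proc/meminfo or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo, strips the per-node "Node N " prefix, then reads "Key: value" pairs until the requested field matches (the \H\u\g\e... escaping in the xtrace output is just bash quoting the comparison pattern character by character). A minimal stand-alone sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    # Sketch only: a reconstruction of the get_meminfo idiom traced above.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Prefer the per-NUMA-node view when a node number is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it so both
        # sources parse identically (mirrors common.sh@29 in the trace).
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk "Key: value [kB]" lines until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints the node-0 surplus count

Run against the node-0 snapshot traced above, the call would print 0, the HugePages_Surp value that feeds nodes_test[node].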
00:04:56.046 14:51:19 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:56.046 14:51:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.046 14:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.046 14:51:19 -- common/autotest_common.sh@10 -- # set +x
00:04:56.047 ************************************
00:04:56.047 START TEST even_2G_alloc
00:04:56.047 ************************************
00:04:56.047 14:51:19 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:56.047 14:51:19 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:56.047 14:51:19 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:56.047 14:51:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:56.047 14:51:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:56.047 14:51:19 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:56.047 14:51:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.047 14:51:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:56.047 14:51:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:56.047 14:51:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.047 14:51:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.047 14:51:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:56.047 14:51:19 -- setup/hugepages.sh@83 -- # : 0
00:04:56.047 14:51:19 -- setup/hugepages.sh@84 -- # : 0
00:04:56.047 14:51:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.047 14:51:19 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:56.047 14:51:19 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:56.047 14:51:19 -- setup/hugepages.sh@153 -- # setup output
00:04:56.047 14:51:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.047 14:51:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:56.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:56.620 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.620 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.620 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:56.620 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
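Just before the setup.sh output above, get_test_nr_hugepages converted the requested 2097152 kB (2 GiB) into nr_hugepages=1024 by dividing by the 2048 kB default hugepage size, and get_test_nr_hugepages_per_node parked the whole budget on the only node (nodes_test[0]=1024). A sketch of that arithmetic under those assumptions; the awk lookup of Hugepagesize is illustrative, not the actual setup/hugepages.sh code:

    # Sketch: the size-to-pages bookkeeping behind the trace above.
    get_test_nr_hugepages() {
        local size_kb=$1 node
        local default_kb
        default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
        (( size_kb >= default_kb )) || return 1
        nr_hugepages=$(( size_kb / default_kb ))   # 2097152 / 2048 = 1024
        # HUGE_EVEN_ALLOC=yes: split the budget evenly across the test
        # nodes; a single node means node 0 takes all 1024 pages.
        local no_nodes=1
        nodes_test=()
        for (( node = 0; node < no_nodes; node++ )); do
            nodes_test[node]=$(( nr_hugepages / no_nodes ))
        done
    }

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh then reserves that many 2 MiB pages, which is why every meminfo snapshot below reports HugePages_Total: 1024.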
00:04:56.620 14:51:20 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:56.620 14:51:20 -- setup/hugepages.sh@89 -- # local node
00:04:56.620 14:51:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.620 14:51:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.620 14:51:20 -- setup/hugepages.sh@92 -- # local surp
00:04:56.620 14:51:20 -- setup/hugepages.sh@93 -- # local resv
00:04:56.620 14:51:20 -- setup/hugepages.sh@94 -- # local anon
00:04:56.620 14:51:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.620 14:51:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.620 14:51:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.620 14:51:20 -- setup/common.sh@18 -- # local node=
00:04:56.620 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:56.620 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.620 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.620 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.620 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.620 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.620 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.620 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:56.620 14:51:20 -- setup/common.sh@31 -- # read -r var val _
00:04:56.620 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6459936 kB' 'MemAvailable: 9428620 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467508 kB' 'Inactive: 2824164 kB' 'Active(anon): 128288 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119200 kB' 'Mapped: 51168 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190052 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105596 kB' 'KernelStack: 6892 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:56.620 14:51:20 -- setup/common.sh@31-32 -- # [xtrace condensed: the field scan compares each key from MemTotal through HardwareCorrupted against AnonHugePages, continuing on every mismatch]
00:04:56.621 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.621 14:51:20 -- setup/common.sh@33 -- # echo 0
00:04:56.621 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:56.621 14:51:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:56.621 14:51:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.621 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.621 14:51:20 -- setup/common.sh@18 -- # local node=
00:04:56.621 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:56.621 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.621 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.621 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.621 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.621 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.621 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.621 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:56.621 14:51:20 -- setup/common.sh@31 -- # read -r var val _
00:04:56.622 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6460396 kB' 'MemAvailable: 9429080 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467144 kB' 'Inactive: 2824164 kB' 'Active(anon): 127924 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119000 kB' 'Mapped: 51068 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190028 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105572 kB' 'KernelStack: 6812 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:56.622 14:51:20 -- setup/common.sh@31-32 -- # [xtrace condensed: the field scan compares each key from MemTotal through HugePages_Rsvd against HugePages_Surp, continuing on every mismatch]
00:04:56.623 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.623 14:51:20 -- setup/common.sh@33 -- # echo 0
00:04:56.623 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:56.623 14:51:20 -- setup/hugepages.sh@99 -- # surp=0
00:04:56.623 14:51:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.623 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.623 14:51:20 -- setup/common.sh@18 -- # local node=
00:04:56.623 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:56.623 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.623 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.623 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.623 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.623 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.623 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.623 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:56.623 14:51:20 -- setup/common.sh@31 -- # read -r var val _
00:04:56.623 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6460396 kB' 'MemAvailable: 9429080 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466948 kB' 'Inactive: 2824164 kB' 'Active(anon): 127728 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118804 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190108 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105652 kB' 'KernelStack: 6800 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:56.623 14:51:20 -- setup/common.sh@31-32 -- # [xtrace condensed: the field scan compares each key from MemTotal through HugePages_Free against HugePages_Rsvd, continuing on every mismatch]
00:04:56.624 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.624 14:51:20 -- setup/common.sh@33 -- # echo 0
00:04:56.624 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:56.624 nr_hugepages=1024
00:04:56.624 resv_hugepages=0
00:04:56.624 surplus_hugepages=0
00:04:56.624 anon_hugepages=0
00:04:56.624 14:51:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:56.624 14:51:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.624 14:51:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.624 14:51:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.624 14:51:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.624 14:51:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.624 14:51:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
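The readings above (anon, surp, resv, and the HugePages_Total fetch that follows) feed one consistency check: the reserved pool must account for exactly the pages the test requested. A compact, illustrative sketch of that step, reusing the get_meminfo sketch from earlier rather than the real setup/hugepages.sh helpers:

    # Sketch only: the verification arithmetic the trace just performed.
    verify_nr_hugepages() {
        local nr_hugepages=$NRHUGE             # 1024, exported before setup.sh ran
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)      # transparent hugepages in use (0 here)
        surp=$(get_meminfo HugePages_Surp)     # surplus pages beyond the static pool (0)
        resv=$(get_meminfo HugePages_Rsvd)     # reserved but not yet faulted in (0)
        total=$(get_meminfo HugePages_Total)   # 1024
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # Mirrors hugepages.sh@107, where the literal 1024 stands in for the
        # already-substituted pool size: total == requested + surplus + reserved.
        (( total == nr_hugepages + surp + resv )) || return 1
    }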
get=HugePages_Total 00:04:56.624 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:56.624 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.624 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.624 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.624 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.624 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.624 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.624 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.624 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.624 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6460396 kB' 'MemAvailable: 9429080 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466896 kB' 'Inactive: 2824164 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118792 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190108 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105652 kB' 'KernelStack: 6800 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 
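
The entries above and below are the field-by-field scan that setup/common.sh's get_meminfo helper runs: mapfile reads the whole meminfo file into an array, any "Node N " prefix is stripped, each line is split on IFS=': ' into key and value, and every key is tested against the requested field (HugePages_Total here) with continue until the match, at which point the value is echoed. A minimal standalone sketch of that pattern, assuming the function name and argument handling are illustrative rather than the actual setup/common.sh source:

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo_sketch FIELD [NODE]: print FIELD's value from /proc/meminfo,
# or from /sys/devices/system/node/nodeN/meminfo when NODE is given.
get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Usage matching the calls in this trace:
#   get_meminfo_sketch HugePages_Total      -> 1024
#   get_meminfo_sketch HugePages_Surp 0     -> 0
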
00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 
14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.625 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.625 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.626 14:51:20 -- setup/common.sh@33 -- # echo 1024 00:04:56.626 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.626 14:51:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.626 14:51:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.626 14:51:20 -- setup/hugepages.sh@27 -- # local node 00:04:56.626 14:51:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.626 14:51:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.626 14:51:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.626 14:51:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.626 14:51:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.626 14:51:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.626 14:51:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.626 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.626 14:51:20 -- setup/common.sh@18 -- # local node=0 00:04:56.626 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.626 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.626 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.626 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.626 14:51:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.626 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.626 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6460396 kB' 'MemUsed: 5776700 kB' 'SwapCached: 0 kB' 'Active: 466944 kB' 'Inactive: 2824164 kB' 'Active(anon): 127724 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 
'Writeback: 0 kB' 'FilePages: 3173880 kB' 'Mapped: 50960 kB' 'AnonPages: 118808 kB' 'Shmem: 10496 kB' 'KernelStack: 6800 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190100 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 
14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.626 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.626 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- 
# continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.627 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.627 14:51:20 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.627 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:56.627 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.627 14:51:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.627 14:51:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.627 14:51:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.627 14:51:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.627 node0=1024 expecting 1024 00:04:56.627 14:51:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.627 ************************************ 00:04:56.627 END TEST even_2G_alloc 00:04:56.627 ************************************ 00:04:56.627 00:04:56.627 real 0m0.558s 00:04:56.627 user 0m0.239s 00:04:56.627 sys 0m0.333s 00:04:56.627 14:51:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.627 14:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.627 14:51:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:56.627 14:51:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.627 14:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.627 14:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.627 ************************************ 00:04:56.627 START TEST odd_alloc 00:04:56.627 ************************************ 00:04:56.627 14:51:20 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:56.627 14:51:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:56.627 14:51:20 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:56.627 14:51:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:56.627 14:51:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.627 14:51:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.627 14:51:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.627 14:51:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:56.627 14:51:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.627 14:51:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.627 14:51:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.627 14:51:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:56.627 14:51:20 -- setup/hugepages.sh@83 -- # : 0 00:04:56.627 14:51:20 -- setup/hugepages.sh@84 -- # : 0 00:04:56.627 14:51:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.627 14:51:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:56.627 14:51:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:56.627 14:51:20 -- setup/hugepages.sh@160 -- # setup output 00:04:56.627 14:51:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.627 14:51:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.204 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.204 0000:00:08.0 (1b36 
0010): Already using the uio_pci_generic driver 00:04:57.204 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.204 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.204 14:51:20 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:57.204 14:51:20 -- setup/hugepages.sh@89 -- # local node 00:04:57.204 14:51:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.204 14:51:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.204 14:51:20 -- setup/hugepages.sh@92 -- # local surp 00:04:57.204 14:51:20 -- setup/hugepages.sh@93 -- # local resv 00:04:57.204 14:51:20 -- setup/hugepages.sh@94 -- # local anon 00:04:57.204 14:51:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.204 14:51:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.204 14:51:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.204 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.204 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.204 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.204 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.204 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.204 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.204 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.204 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6463680 kB' 'MemAvailable: 9432364 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467060 kB' 'Inactive: 2824164 kB' 'Active(anon): 127840 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118880 kB' 'Mapped: 50984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190184 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105728 kB' 'KernelStack: 6784 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.204 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.204 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 
14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.205 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.205 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.205 14:51:20 -- 
setup/common.sh@33 -- # echo 0 00:04:57.205 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.205 14:51:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.205 14:51:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.205 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.205 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.205 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.205 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.205 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.205 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.205 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.205 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.205 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6463680 kB' 'MemAvailable: 9432364 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467032 kB' 'Inactive: 2824164 kB' 'Active(anon): 127812 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 50984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190184 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105728 kB' 'KernelStack: 6768 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB' 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.206 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.206 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 
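
For orientation, this second pass is verify_nr_hugepages running for the odd_alloc test: after HUGEMEM=2049 was exported, get_test_nr_hugepages turned the 2098176 kB request into 1025 pages, and the scans here collect AnonHugePages, HugePages_Surp (globally and per node) and HugePages_Rsvd so the script can check that HugePages_Total equals nr_hugepages + surp + resv, as it did for the 1024-page case above. A back-of-the-envelope sketch of that arithmetic; the ceiling rounding is an assumption for illustration, not lifted from setup/hugepages.sh:

# HUGEMEM is megabytes; Hugepagesize in this trace is 2048 kB.
HUGEMEM=2049
size_kb=$(( HUGEMEM * 1024 ))    # 2098176 kB, the get_test_nr_hugepages argument above
hpg_kb=2048
nr_hugepages=$(( (size_kb + hpg_kb - 1) / hpg_kb ))   # ceiling division (assumed)
echo "$nr_hugepages"             # 1025: 2049 MB is half a page past 1024, so it rounds up

# The consistency check these scans feed, by analogy with the even_2G_alloc pass above:
surp=0 resv=0                    # the values the HugePages_Surp/HugePages_Rsvd scans echo
(( 1025 == nr_hugepages + surp + resv )) && echo 'HugePages_Total matches'
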
00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.207 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.207 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:57.207 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.207 14:51:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.207 14:51:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.207 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.207 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.207 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.207 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.207 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.207 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.207 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.207 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.207 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.207 14:51:20 -- 
00:04:57.207 14:51:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:57.207 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:57.207 14:51:20 -- setup/common.sh@18 -- # local node=
00:04:57.207 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:57.207 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.207 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.207 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.207 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.207 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.207 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.207 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:57.207 14:51:20 -- setup/common.sh@31 -- # read -r var val _
00:04:57.207 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6463680 kB' 'MemAvailable: 9432364 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466996 kB' 'Inactive: 2824164 kB' 'Active(anon): 127776 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118820 kB' 'Mapped: 50984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190184 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105728 kB' 'KernelStack: 6752 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@32 "continue" iterations over the non-matching /proc/meminfo keys elided ...]
00:04:57.209 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.209 14:51:20 -- setup/common.sh@33 -- # echo 0
00:04:57.209 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:57.209 14:51:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:57.209 nr_hugepages=1025
00:04:57.209 resv_hugepages=0
00:04:57.209 surplus_hugepages=0
00:04:57.209 anon_hugepages=0
00:04:57.209 14:51:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:57.209 14:51:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:57.209 14:51:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:57.209 14:51:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:57.209 14:51:20 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:57.209 14:51:20 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
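The @107 check just traced is the core invariant of these hugepage tests. A hedged sketch of the same arithmetic, reusing get_meminfo_sketch from above; verify_hugepages_sketch is a made-up name, while the real logic is verify_nr_hugepages in test/setup/hugepages.sh.

    # The kernel's HugePages_Total must equal what the test configured plus any
    # surplus and reserved pages: total == nr_hugepages + surp + resv.
    verify_hugepages_sketch() {
        local nr_hugepages=$1 total surp resv    # 1025 in the odd_alloc run above
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        (( total == nr_hugepages + surp + resv )) || return 1
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    }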
00:04:57.209 14:51:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:57.209 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:57.209 14:51:20 -- setup/common.sh@18 -- # local node=
00:04:57.209 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:57.209 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.209 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.209 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.209 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.209 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.209 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.209 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:57.209 14:51:20 -- setup/common.sh@31 -- # read -r var val _
00:04:57.209 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6463428 kB' 'MemAvailable: 9432112 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 466732 kB' 'Inactive: 2824164 kB' 'Active(anon): 127512 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190188 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105732 kB' 'KernelStack: 6816 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@32 "continue" iterations over the non-matching /proc/meminfo keys elided ...]
00:04:57.211 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:57.211 14:51:20 -- setup/common.sh@33 -- # echo 1025
00:04:57.211 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:57.211 14:51:20 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:57.211 14:51:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.211 14:51:20 -- setup/hugepages.sh@27 -- # local node
00:04:57.211 14:51:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.211 14:51:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:57.211 14:51:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:57.211 14:51:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
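An illustrative sketch of the get_nodes step just traced, assuming bash with extglob; nodes_sys_sketch is a made-up name for the real nodes_sys array, and the per-node value is fetched with the get_meminfo_sketch helper from earlier. On this single-node VM only node0 is discovered.

    declare -a nodes_sys_sketch
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}                 # "0" for .../node0
        nodes_sys_sketch[idx]=$(get_meminfo_sketch HugePages_Total "$idx")
    done
    echo "no_nodes=${#nodes_sys_sketch[@]}"    # -> 1 here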
00:04:57.211 14:51:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.211 14:51:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.211 14:51:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:57.211 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.211 14:51:20 -- setup/common.sh@18 -- # local node=0
00:04:57.211 14:51:20 -- setup/common.sh@19 -- # local var val
00:04:57.211 14:51:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.211 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.211 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:57.211 14:51:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:57.211 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.211 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.211 14:51:20 -- setup/common.sh@31 -- # IFS=': '
00:04:57.211 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6463428 kB' 'MemUsed: 5773668 kB' 'SwapCached: 0 kB' 'Active: 467112 kB' 'Inactive: 2824164 kB' 'Active(anon): 127892 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 3173880 kB' 'Mapped: 50960 kB' 'AnonPages: 119028 kB' 'Shmem: 10496 kB' 'KernelStack: 6832 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190188 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:57.211 14:51:20 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32 "continue" iterations over the non-matching node0 meminfo keys elided ...]
00:04:57.212 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.212 14:51:20 -- setup/common.sh@33 -- # echo 0
00:04:57.212 14:51:20 -- setup/common.sh@33 -- # return 0
00:04:57.212 14:51:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:57.212 14:51:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:57.212 14:51:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:57.212 14:51:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:57.212 14:51:20 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:57.212 node0=1025 expecting 1025
00:04:57.212 14:51:20 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:57.212 real    0m0.574s
00:04:57.212 user    0m0.244s
00:04:57.212 sys     0m0.344s
00:04:57.212 14:51:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:57.212 14:51:20 -- common/autotest_common.sh@10 -- # set +x
00:04:57.212 ************************************
00:04:57.212 END TEST odd_alloc
00:04:57.212 ************************************
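A hedged sketch of the run_test plumbing that produces the START/END banners and timings above, based only on what this trace shows; run_test_sketch is a made-up name, while the real run_test lives in test/common/autotest_common.sh and also performs argument checks like the '[' 2 -le 1 ']' seen here.

    run_test_sketch() {
        local name=$1; shift
        printf '%s\n' '************************************' "START TEST $name" \
            '************************************'
        time "$@"                  # run the traced test function, e.g. odd_alloc
        local rc=$?
        printf '%s\n' '************************************' "END TEST $name" \
            '************************************'
        return "$rc"
    }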
00:04:57.474 14:51:20 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:57.474 14:51:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:57.474 14:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:57.474 14:51:20 -- common/autotest_common.sh@10 -- # set +x
00:04:57.474 ************************************
00:04:57.474 START TEST custom_alloc
00:04:57.474 ************************************
00:04:57.474 14:51:20 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:57.474 14:51:20 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:57.474 14:51:20 -- setup/hugepages.sh@169 -- # local node
00:04:57.474 14:51:20 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:57.474 14:51:20 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:57.474 14:51:20 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:57.474 14:51:20 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:57.474 14:51:20 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:57.474 14:51:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:57.474 14:51:20 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:57.474 14:51:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:57.474 14:51:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:57.474 14:51:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:57.474 14:51:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:57.474 14:51:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@83 -- # : 0
00:04:57.474 14:51:20 -- setup/hugepages.sh@84 -- # : 0
00:04:57.474 14:51:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:57.474 14:51:20 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:57.474 14:51:20 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:57.474 14:51:20 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:57.474 14:51:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:57.474 14:51:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:57.474 14:51:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:57.474 14:51:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:57.474 14:51:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:57.474 14:51:20 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:57.474 14:51:20 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:57.474 14:51:20 -- setup/hugepages.sh@78 -- # return 0
00:04:57.474 14:51:20 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:57.474 14:51:20 -- setup/hugepages.sh@187 -- # setup output
00:04:57.474 14:51:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:57.474 14:51:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
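A hedged sketch of the two steps just traced, with illustrative variable names: the requested size in kB is converted to a page count using the platform's Hugepagesize (1048576 kB / 2048 kB = 512 pages), and the per-node counts are handed to scripts/setup.sh through the HUGENODE array seen above. get_meminfo_sketch is the helper sketched earlier, not SPDK's real function.

    size_kb=1048576
    hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)    # 2048 kB on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))         # -> 512
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=$nr_hugepages
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # mirrors hugepages.sh@182
    done
    echo "HUGENODE=${HUGENODE[*]}"                        # -> nodes_hp[0]=512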
00:04:57.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:57.739 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:57.739 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:57.739 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:57.739 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:57.739 14:51:21 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:57.739 14:51:21 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:57.739 14:51:21 -- setup/hugepages.sh@89 -- # local node
00:04:57.739 14:51:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:57.739 14:51:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:57.739 14:51:21 -- setup/hugepages.sh@92 -- # local surp
00:04:57.739 14:51:21 -- setup/hugepages.sh@93 -- # local resv
00:04:57.739 14:51:21 -- setup/hugepages.sh@94 -- # local anon
00:04:57.739 14:51:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
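The @96 check above compares the current transparent hugepage setting against *[never]*. A stand-alone equivalent under the assumption of the standard kernel sysfs location; thp_enabled is an illustrative variable, and get_meminfo_sketch is the helper sketched earlier.

    thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_enabled != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages from /proc/meminfo is worth checking.
        anon=$(get_meminfo_sketch AnonHugePages)
    fi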
00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': 
' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.739 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.739 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue 
00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.740 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:57.740 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:57.740 14:51:21 -- setup/hugepages.sh@97 -- # anon=0
00:04:57.740 14:51:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:57.740 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.740 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:57.740 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:57.740 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.740 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.740 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.740 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.740 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.740 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:57.740 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7516892 kB' 'MemAvailable: 10485576 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467084 kB' 'Inactive: 2824164 kB' 'Active(anon): 127864 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119200 kB' 'Mapped: 51092 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190424 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105968 kB' 'KernelStack: 6800 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
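Before each scan, the trace shows mapfile -t mem slurping the meminfo source into an array and mem=("${mem[@]#Node +([0-9]) }") stripping a leading "Node <n> " prefix from every element, so the same parser works for both /proc/meminfo and the per-node sysfs files (whose lines carry that prefix; for /proc/meminfo the strip is a no-op). The +([0-9]) pattern requires extglob. A small sketch of just that step:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Read a meminfo-style file into an array, one line per element.
    mapfile -t mem < /sys/devices/system/node/node0/meminfo

    # Per-node files prefix each line with "Node 0 "; strip the prefix so
    # the remaining "Key: value" text matches the /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    printf '%s\n' "${mem[@]}"
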
00:04:57.740 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.740 14:51:21 -- setup/common.sh@32 -- # continue
00:04:57.740 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:57.740 14:51:21 -- setup/common.sh@31 -- # read -r var val _
[... compare-and-continue repeats for every following key until HugePages_Surp matches ...]
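A note on the backslashes in these compare lines: under set -x, bash re-quotes the words it prints, and a quoted right-hand side of == inside [[ ]] is rendered with every character escaped to show it will be matched literally rather than as a glob. The trace therefore implies the comparison is written with a quoted variable, roughly:

    #!/usr/bin/env bash
    set -x
    var=MemTotal
    get=HugePages_Surp
    [[ $var == "$get" ]]   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == $get ]]     # unquoted RHS is a glob pattern; traces without backslashes
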
00:04:57.741 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.741 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:57.741 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:57.741 14:51:21 -- setup/hugepages.sh@99 -- # surp=0
00:04:57.741 14:51:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:57.741 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:57.741 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:57.742 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:57.742 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.742 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.742 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.742 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.742 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.742 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.742 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7517152 kB' 'MemAvailable: 10485836 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467184 kB' 'Inactive: 2824164 kB' 'Active(anon): 127964 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 51092 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190424 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105968 kB' 'KernelStack: 6768 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:57.742 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:57.742 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:57.742 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.742 14:51:21 -- setup/common.sh@32 -- # continue
[... compare-and-continue repeats for every following key until HugePages_Rsvd matches ...]
00:04:58.005 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.005 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.005 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.005 14:51:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:58.005 14:51:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:58.005 nr_hugepages=512
00:04:58.005 14:51:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:58.005 resv_hugepages=0
00:04:58.005 14:51:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:58.005 surplus_hugepages=0
00:04:58.005 14:51:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:58.005 anon_hugepages=0
00:04:58.005 14:51:21 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:58.005 14:51:21 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:58.005 14:51:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:58.005 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:58.005 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:58.005 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.005 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.005 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.005 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.005 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.005 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.005 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.005 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.005 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.005 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7517152 kB' 'MemAvailable: 10485836 kB' 'Buffers: 2684 kB' 'Cached: 3171196 kB' 'SwapCached: 0 kB' 'Active: 467088 kB' 'Inactive: 2824164 kB' 'Active(anon): 127868 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 118908 kB' 'Mapped: 51088 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190396 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105940 kB' 'KernelStack: 6768 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 317840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:58.006 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.006 14:51:21 -- setup/common.sh@32 -- # continue
[... compare-and-continue repeats for every following key until HugePages_Total matches ...]
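At this point hugepages.sh has gathered anon=0, surp=0 and resv=0 and is reading HugePages_Total; the (( ... )) lines then assert that the configured pool matches the kernel's accounting, i.e. 512 == 512 + 0 + 0. A condensed, self-contained sketch of the same consistency check (hypothetical hp helper; nr_hugepages=512 is the value this test configured earlier):

    #!/usr/bin/env bash
    # Mirror of the (( ... )) checks traced above, against the live system.
    nr_hugepages=512

    hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    surp=$(hp HugePages_Surp)     # 0 in the dump above
    resv=$(hp HugePages_Rsvd)     # 0 in the dump above
    total=$(hp HugePages_Total)   # 512 in the dump above

    # The pool is consistent when configured + surplus + reserved == total.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) && echo "pool of $total hugepages fully configured"
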
00:04:58.006 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.006 14:51:21 -- setup/common.sh@33 -- # echo 512
00:04:58.006 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.006 14:51:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:58.006 14:51:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.007 14:51:21 -- setup/hugepages.sh@27 -- # local node
00:04:58.007 14:51:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.007 14:51:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:58.007 14:51:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:58.007 14:51:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:58.007 14:51:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:58.007 14:51:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.007 14:51:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.007 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.007 14:51:21 -- setup/common.sh@18 -- # local node=0
00:04:58.007 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.007 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.007 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.007 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:58.007 14:51:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.007 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.007 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.007 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.007 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.007 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7517152 kB' 'MemUsed: 4719944 kB' 'SwapCached: 0 kB' 'Active: 467256 kB' 'Inactive: 2824164 kB' 'Active(anon): 128036 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'FilePages: 3173880 kB' 'Mapped: 51088 kB' 'AnonPages: 119100 kB' 'Shmem: 10496 kB' 'KernelStack: 6820 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190392 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
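get_nodes globs /sys/devices/system/node/node+([0-9]) (one node on this VM, hence no_nodes=1), and get_meminfo is then re-invoked with an explicit node argument: when the per-node sysfs meminfo exists, mem_f switches from /proc/meminfo to it so HugePages_Surp can be checked per node. A sketch of that file-selection step (hypothetical function name, illustration only):

    #!/usr/bin/env bash
    # Pick the meminfo source the way the traced get_meminfo does:
    # system-wide by default, per-node when a node number is given
    # and its sysfs file exists.
    node_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    node_meminfo_file      # -> /proc/meminfo
    node_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo
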
00:04:58.007 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.007 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.007 14:51:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.007 14:51:21 -- setup/common.sh@32 -- # continue
[trace trimmed: the same IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining node0 meminfo field until the requested key matches]
00:04:58.008 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.008 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.008 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.008 14:51:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:58.008 14:51:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:58.008 14:51:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:58.008 14:51:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:58.008 node0=512 expecting 512
00:04:58.008 14:51:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:58.008 14:51:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:58.008 
00:04:58.008 real	0m0.569s
00:04:58.008 user	0m0.237s
00:04:58.008 sys	0m0.365s
00:04:58.008 14:51:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:58.008 14:51:21 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 ************************************
00:04:58.008 END TEST custom_alloc
00:04:58.008 ************************************
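The compare/continue cycle that dominates the trace above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time until the requested field (here HugePages_Surp) matches, then echoing its value. A minimal sketch of that lookup follows; the function name get_meminfo_sketch is hypothetical, and this is a simplification, since the traced helper snapshots the file with mapfile and strips "Node N " prefixes for per-node queries.

    #!/usr/bin/env bash
    # Hypothetical, simplified restatement of the scan traced above: walk
    # "key: value" pairs and emit the value once the requested key matches.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookup, as the [[ -e /sys/devices/system/node/... ]] test
        # in the trace suggests (assumption: node-local lines parse the same;
        # the real helper strips their "Node N " prefix first).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # the long compare/continue runs above
            echo "$val"                       # e.g. 0 for HugePages_Surp here
            return 0
        done <"$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Surp  # would print 0 on this box, per the trace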
00:04:58.008 14:51:21 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:58.008 14:51:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.008 14:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.008 14:51:21 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 ************************************
00:04:58.008 START TEST no_shrink_alloc
00:04:58.008 ************************************
00:04:58.008 14:51:21 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:58.008 14:51:21 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:58.008 14:51:21 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:58.008 14:51:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:58.008 14:51:21 -- setup/hugepages.sh@51 -- # shift
00:04:58.008 14:51:21 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:58.008 14:51:21 -- setup/hugepages.sh@52 -- # local node_ids
00:04:58.008 14:51:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:58.008 14:51:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:58.008 14:51:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:58.008 14:51:21 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:58.008 14:51:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:58.008 14:51:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:58.008 14:51:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:58.008 14:51:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:58.008 14:51:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:58.008 14:51:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:58.008 14:51:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:58.008 14:51:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:58.008 14:51:21 -- setup/hugepages.sh@73 -- # return 0
00:04:58.008 14:51:21 -- setup/hugepages.sh@198 -- # setup output
00:04:58.008 14:51:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.008 14:51:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:58.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:58.269 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.269 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.269 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.269 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.532 14:51:21 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:58.532 14:51:21 -- setup/hugepages.sh@89 -- # local node
00:04:58.532 14:51:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:58.532 14:51:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:58.532 14:51:21 -- setup/hugepages.sh@92 -- # local surp
00:04:58.532 14:51:21 -- setup/hugepages.sh@93 -- # local resv
00:04:58.532 14:51:21 -- setup/hugepages.sh@94 -- # local anon
00:04:58.532 14:51:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:58.532 14:51:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:58.532 14:51:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:58.532 14:51:21 -- setup/common.sh@18 -- # local node=
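The get_test_nr_hugepages 2097152 0 trace above sizes the pool: 2097152 kB of hugepages at the 2048 kB page size gives nr_hugepages=1024, all assigned to node 0. A rough sketch of that bookkeeping follows; the names mirror the trace, but the body and the kB units are inferred from 2097152 / 2048 = 1024 and are an assumption, not the script itself.

    #!/usr/bin/env bash
    # Hypothetical sketch of the hugepage-count bookkeeping traced above:
    # turn a size into a page count and pin it to each requested node.
    get_test_nr_hugepages_sketch() {
        local size_kb=$1; shift
        local user_nodes=("$@")  # e.g. ("0"), matching node_ids=('0') above
        local hp_kb
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this rig
        local nr_hugepages=$((size_kb / hp_kb))                   # -> 1024
        declare -gA nodes_test=()
        local node
        for node in "${user_nodes[@]}"; do
            nodes_test[$node]=$nr_hugepages  # trace: nodes_test[_no_nodes]=1024
        done
    }

    get_test_nr_hugepages_sketch 2097152 0
    echo "node0=${nodes_test[0]}"  # -> node0=1024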
00:04:58.532 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.532 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.532 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.532 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.532 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.532 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.532 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.532 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.532 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.533 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6482828 kB' 'MemAvailable: 9451516 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 467428 kB' 'Inactive: 2824168 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 51184 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190176 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105720 kB' 'KernelStack: 6816 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55896 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:58.533 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.533 14:51:21 -- setup/common.sh@32 -- # continue
[trace trimmed: the same IFS=': ' / read -r var val _ / compare / continue cycle repeats for each following field until the requested key matches]
00:04:58.534 14:51:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.534 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.534 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.534 14:51:21 -- setup/hugepages.sh@97 -- # anon=0
00:04:58.534 14:51:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:58.534 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.534 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:58.534 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.534 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.534 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.534 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.534 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.534 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.534 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.534 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.534 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.535 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6483080 kB' 'MemAvailable: 9451768 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 467228 kB' 'Inactive: 2824168 kB' 'Active(anon): 128008 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119396 kB' 'Mapped: 51184 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190176 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105720 kB' 'KernelStack: 6784 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:58.534 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.534 14:51:21 -- setup/common.sh@32 -- # continue
[trace trimmed: the cycle repeats field by field until the requested key matches]
00:04:58.535 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.535 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.535 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.535 14:51:21 -- setup/hugepages.sh@99 -- # surp=0
00:04:58.535 14:51:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.535 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.535 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:58.535 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.535 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.535 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.535 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.535 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.535 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.535 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.535 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.535 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.536 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6483332 kB' 'MemAvailable: 9452020 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 466804 kB' 'Inactive: 2824168 kB' 'Active(anon): 127584 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118884 kB' 'Mapped: 51012 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190192 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105736 kB' 'KernelStack: 6800 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:58.536 14:51:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.536 14:51:21 -- setup/common.sh@32 -- # continue
[trace trimmed: the cycle repeats field by field until the requested key matches]
00:04:58.537 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.537 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.537 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.537 14:51:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:58.537 14:51:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:58.537 nr_hugepages=1024
00:04:58.537 resv_hugepages=0
00:04:58.537 14:51:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:58.537 surplus_hugepages=0
00:04:58.537 14:51:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:58.537 anon_hugepages=0
00:04:58.537 14:51:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:58.537 14:51:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:58.537 14:51:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
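At this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0 from the scans above (the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test decides whether AnonHugePages is consulted at all), printed the summary, and asserted that the configured total equals nr_hugepages + surp + resv; it then re-reads HugePages_Total below. A sketch of that accounting check follows; the helper names (meminfo_field, verify_nr_hugepages_sketch) are hypothetical.

    #!/usr/bin/env bash
    # Hypothetical sketch of the accounting verify_nr_hugepages performs:
    # read the hugepage counters back and check the pool adds up.
    meminfo_field() { awk -v f="$1:" '$1 == f { print $2 }' /proc/meminfo; }

    verify_nr_hugepages_sketch() {
        local expected=$1                       # 1024 in this run
        local anon surp resv total
        anon=$(meminfo_field AnonHugePages)     # trace: anon=0
        surp=$(meminfo_field HugePages_Surp)    # trace: surp=0
        resv=$(meminfo_field HugePages_Rsvd)    # trace: resv=0
        total=$(meminfo_field HugePages_Total)  # trace: 1024
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # Mirrors the (( 1024 == nr_hugepages + surp + resv )) check above.
        (( total == expected + surp + resv ))
    }

    verify_nr_hugepages_sketch 1024 && echo OK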
00:04:58.537 14:51:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:58.537 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:58.537 14:51:21 -- setup/common.sh@18 -- # local node=
00:04:58.537 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.537 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.537 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.537 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.537 14:51:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.537 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.537 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.537 14:51:21 -- setup/common.sh@31 -- # IFS=': '
00:04:58.537 14:51:21 -- setup/common.sh@31 -- # read -r var val _
00:04:58.537 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6483332 kB' 'MemAvailable: 9452020 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 466780 kB' 'Inactive: 2824168 kB' 'Active(anon): 127560 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118892 kB' 'Mapped: 51012 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190184 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105728 kB' 'KernelStack: 6800 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:58.537 14:51:21 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [repetitive xtrace elided: per-field scan of the snapshot above, MemTotal through Unaccepted]
00:04:58.539 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.539 14:51:21 -- setup/common.sh@33 -- # echo 1024
00:04:58.539 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.539 14:51:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:58.539 14:51:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.539 14:51:21 -- setup/hugepages.sh@27 -- # local node
00:04:58.539 14:51:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.539 14:51:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:58.539 14:51:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:58.539 14:51:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
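Note: get_nodes, traced just above, enumerates the NUMA nodes (a single node0 on this runner) and records each node's configured hugepage count. A sketch under stated assumptions: reading the count from the per-node nr_hugepages file is an assumption, since the trace only shows the node glob and the resulting value 1024; the 2048 kB page size comes from this run's snapshot.

#!/usr/bin/env bash
# Hedged sketch of the get_nodes step: one entry per NUMA node, value = that
# node's configured 2 MiB hugepages. The sysfs layout is standard Linux.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                         # ".../node0" -> "0"
    nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }
echo "nodes=$no_nodes node0=${nodes_sys[0]:-n/a}"   # node0=1024 on this runner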
00:04:58.539 14:51:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:58.539 14:51:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.539 14:51:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.539 14:51:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.539 14:51:21 -- setup/common.sh@18 -- # local node=0
00:04:58.539 14:51:21 -- setup/common.sh@19 -- # local var val
00:04:58.539 14:51:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.539 14:51:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.539 14:51:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:58.539 14:51:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.539 14:51:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.539 14:51:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.539 14:51:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6483332 kB' 'MemUsed: 5753764 kB' 'SwapCached: 0 kB' 'Active: 466720 kB' 'Inactive: 2824168 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3173884 kB' 'Mapped: 51012 kB' 'AnonPages: 118800 kB' 'Shmem: 10496 kB' 'KernelStack: 6768 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190184 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:58.539 14:51:21 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [repetitive xtrace elided: per-field scan of the node0 snapshot above, MemTotal through HugePages_Free]
00:04:58.540 14:51:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.540 14:51:21 -- setup/common.sh@33 -- # echo 0
00:04:58.540 14:51:21 -- setup/common.sh@33 -- # return 0
00:04:58.540 14:51:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:58.540 14:51:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:58.540 14:51:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:58.540 14:51:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:58.540 14:51:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:58.540 node0=1024 expecting 1024
00:04:58.540 14:51:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:58.540 14:51:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:58.540 14:51:21 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:58.540 14:51:21 -- setup/hugepages.sh@202 -- # setup output
00:04:58.540 14:51:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.540 14:51:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:58.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:58.801 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.801 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.801 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:58.801 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.063 INFO: Requested 512 hugepages but 1024 already allocated on node0
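Note: the INFO line records that scripts/setup.sh, invoked with NRHUGE=512 and CLEAR_HUGE=no, found 1024 pages already reserved and left the pool alone. A plausible minimal sketch of that guard follows; the control flow is inferred from the message and the sysfs path is standard Linux, none of it copied from scripts/setup.sh.

#!/usr/bin/env bash
# Hedged sketch: grow the node-0 2 MiB hugepage pool only when the current
# reservation is below the request; never shrink when CLEAR_HUGE=no.
NRHUGE=${NRHUGE:-512}
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(< "$nr")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$nr"    # requires root
fi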
00:04:59.063 14:51:22 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:59.063 14:51:22 -- setup/hugepages.sh@89 -- # local node
00:04:59.063 14:51:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.063 14:51:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.063 14:51:22 -- setup/hugepages.sh@92 -- # local surp
00:04:59.063 14:51:22 -- setup/hugepages.sh@93 -- # local resv
00:04:59.063 14:51:22 -- setup/hugepages.sh@94 -- # local anon
00:04:59.063 14:51:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.063 14:51:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.063 14:51:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.063 14:51:22 -- setup/common.sh@18 -- # local node=
00:04:59.063 14:51:22 -- setup/common.sh@19 -- # local var val
00:04:59.063 14:51:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.063 14:51:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.063 14:51:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.063 14:51:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.063 14:51:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.063 14:51:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.063 14:51:22 -- setup/common.sh@31 -- # IFS=': '
00:04:59.064 14:51:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6478452 kB' 'MemAvailable: 9447140 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 468016 kB' 'Inactive: 2824168 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 51312 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190148 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105692 kB' 'KernelStack: 6940 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:59.064 14:51:22 -- setup/common.sh@31 -- # read -r var val _
00:04:59.064 14:51:22 -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [repetitive xtrace elided: per-field scan of the snapshot above, MemTotal through HardwareCorrupted]
00:04:59.065 14:51:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.065 14:51:22 -- setup/common.sh@33 -- # echo 0
00:04:59.065 14:51:22 -- setup/common.sh@33 -- # return 0
00:04:59.065 14:51:22 -- setup/hugepages.sh@97 -- # anon=0
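Note: the HugePages_Surp and HugePages_Rsvd queries that follow feed the same consistency check the script already ran at hugepages.sh@107: the kernel's total must equal the requested page count plus surplus plus reserved pages. A sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from the earlier note (assumed to be sourced); 1024 is this run's requested pool size, taken from the trace.

#!/usr/bin/env bash
# Hedged sketch of the verify_nr_hugepages bookkeeping.
nr_hugepages=1024
total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the snapshots above
surp=$(get_meminfo_sketch HugePages_Surp)     # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi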
00:04:59.065 14:51:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.065 14:51:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.065 14:51:22 -- setup/common.sh@18 -- # local node=
00:04:59.065 14:51:22 -- setup/common.sh@19 -- # local var val
00:04:59.065 14:51:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.065 14:51:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.065 14:51:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.065 14:51:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.065 14:51:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.065 14:51:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.065 14:51:22 -- setup/common.sh@31 -- # IFS=': '
00:04:59.065 14:51:22 -- setup/common.sh@31 -- # read -r var val _
00:04:59.065 14:51:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6478704 kB' 'MemAvailable: 9447392 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 2824168 kB' 'Active(anon): 128364 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119224 kB' 'Mapped: 51168 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190116 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105660 kB' 'KernelStack: 6812 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55864 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
00:04:59.065 14:51:22 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [repetitive xtrace elided: per-field scan of the snapshot above, MemTotal through HugePages_Rsvd]
00:04:59.066 14:51:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.066 14:51:22 -- setup/common.sh@33 -- # echo 0
00:04:59.066 14:51:22 -- setup/common.sh@33 -- # return 0
00:04:59.066 14:51:22 -- setup/hugepages.sh@99 -- # surp=0
00:04:59.066 14:51:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.066 14:51:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.066 14:51:22 -- setup/common.sh@18 -- # local node=
00:04:59.066 14:51:22 -- setup/common.sh@19 -- # local var val
00:04:59.066 14:51:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.066 14:51:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.066 14:51:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.066 14:51:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.066 14:51:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.066 14:51:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.066 14:51:22 -- setup/common.sh@31 -- # IFS=': '
00:04:59.066 14:51:22 -- setup/common.sh@31 -- # read -r var val _
00:04:59.066 14:51:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6479548 kB' 'MemAvailable: 9448236 kB' 'Buffers: 2684 kB' 'Cached: 3171200 kB' 'SwapCached: 0 kB' 'Active: 466848 kB' 'Inactive: 2824168 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118928 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84456 kB' 'Slab: 190112 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105656 kB' 'KernelStack: 6752 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 318040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 204652 kB' 'DirectMap2M: 6086656 kB' 'DirectMap1G: 8388608 kB'
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 
14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # continue 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.067 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # 
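
The loop traced above is all get_meminfo does: split each meminfo line on ': ' and keep the value of the requested key. A minimal standalone sketch of that pattern (simplified from the traced mapfile/continue form; only the function name and the splitting behaviour come from the trace, the rest is illustrative):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # keys split on ': '; val gets the number, _ swallows the "kB" unit
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this box
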
The scan stops at the requested key: 00:04:59.067 14:51:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.067 14:51:22 -- setup/common.sh@33 -- # echo 0 00:04:59.067 14:51:22 -- setup/common.sh@33 -- # return 0 00:04:59.067 14:51:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:59.067 14:51:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.067 nr_hugepages=1024 resv_hugepages=0 00:04:59.067 14:51:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 surplus_hugepages=0 00:04:59.068 14:51:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 anon_hugepages=0 00:04:59.068 14:51:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.068 14:51:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.068 14:51:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.068 14:51:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total The lookup repeats with the same /proc/meminfo setup; the fresh snapshot is identical to the previous one except for 'MemFree: 6479296 kB' 'MemAvailable: 9447984 kB' 'Active: 466760 kB' 'Active(anon): 127540 kB' 'AnonPages: 118624 kB' 'Slab: 190128 kB' 'SUnreclaim: 105672 kB' 'KernelStack: 6784 kB' 'PageTables: 4040 kB', and every field is skipped with continue until the key matches: 00:04:59.069 14:51:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.069 14:51:22 -- setup/common.sh@33 -- # echo 1024 00:04:59.069 14:51:22 -- setup/common.sh@33 -- # return 0 00:04:59.069 14:51:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.069 14:51:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.069 14:51:22 -- setup/hugepages.sh@27 -- # local node 00:04:59.069 14:51:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.069 14:51:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.069 14:51:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.069 14:51:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.069 14:51:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.069 14:51:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.069 14:51:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.069 14:51:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.069 14:51:22 -- setup/common.sh@18 -- # local node=0 00:04:59.069 14:51:22 -- setup/common.sh@19 -- # local var val 00:04:59.069 14:51:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.069 14:51:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.069 14:51:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.069 14:51:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.069 14:51:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.069 14:51:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.069 14:51:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.069 14:51:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.069 14:51:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 6479296 kB' 'MemUsed: 5757800 kB' 'SwapCached: 0 kB' 'Active: 466720 kB' 'Inactive: 2824168 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2824168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3173884 kB' 'Mapped: 50960 kB' 'AnonPages: 118584 kB' 'Shmem: 10496 kB' 'KernelStack: 6768 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84456 kB' 'Slab: 190128 kB' 'SReclaimable: 84456 kB' 'SUnreclaim: 105672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' The node0 fields are then scanned against \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the same way.
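
For the per-node lookup the trace switches mem_f to the node's own meminfo file and strips the "Node <n> " prefix before parsing. A sketch of that variant (the sysfs path, the prefix-strip pattern, and the extglob-style +([0-9]) match follow the traced lines; the function name and loop shape are illustrative):

    shopt -s extglob   # needed for the +([0-9]) pattern used below
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        # node-local stats live under /sys, prefixed with "Node <n> " per line
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, as in the trace
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_node_meminfo HugePages_Surp 0   # prints 0 for node0 in this run
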
The scan ends at the key itself: 00:04:59.070 14:51:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.070 14:51:22 -- setup/common.sh@33 -- # echo 0 00:04:59.070 14:51:22 -- setup/common.sh@33 -- # return 0 00:04:59.070 14:51:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.070 14:51:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.070 14:51:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.070 14:51:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.070 node0=1024 expecting 1024 00:04:59.070 14:51:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.070 14:51:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.070 00:04:59.070 real 0m1.069s 00:04:59.070 user 0m0.461s 00:04:59.070 sys 0m0.672s 00:04:59.070 14:51:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.071 14:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 ************************************ 00:04:59.071 END TEST no_shrink_alloc 00:04:59.071 ************************************ 00:04:59.071 14:51:22 -- setup/hugepages.sh@217 -- # clear_hp 00:04:59.071 14:51:22 -- setup/hugepages.sh@37 -- # local node hp 00:04:59.071 14:51:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.071 14:51:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.071 14:51:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:59.071 14:51:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.071 14:51:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:59.071 14:51:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:59.071 14:51:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:59.071 00:04:59.071 real 0m4.972s 00:04:59.071 user 0m2.098s 00:04:59.071 sys 0m2.897s 00:04:59.071 14:51:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.071 14:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 ************************************ 00:04:59.071 END TEST hugepages 00:04:59.071 ************************************
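
The arithmetic the suite just verified is compact enough to restate. A sketch of the invariant, using the get_meminfo/get_node_meminfo sketches above and this run's values; the harness itself tracks per-node expectations in a nodes_test array, which is simplified away here:

    # Total hugepages must equal the requested count plus surplus and
    # reserved pages, globally and then per NUMA node.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    node0=$(get_node_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $nr_hugepages"   # node0=1024 expecting 1024
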
00:04:59.071 14:51:22 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:59.071 14:51:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.071 14:51:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.071 14:51:22 -- common/autotest_common.sh@10 -- # set +x 00:04:59.071 ************************************ 00:04:59.071 START TEST driver 00:04:59.071 ************************************ 00:04:59.071 14:51:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:59.071 * Looking for test storage... 00:04:59.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.071 14:51:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.071 14:51:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.071 14:51:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.329 14:51:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 (scripts/common.sh cmp_versions splits both versions on '.', compares the fields numerically -- 1 < 2 -- and returns 0: the installed lcov 1.15 predates 2) 00:04:59.330 14:51:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' (LCOV_OPTS and LCOV are then exported with these flags plus --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1) 00:04:59.330 14:51:22 -- setup/driver.sh@68 -- # setup reset 00:04:59.330 14:51:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.330 14:51:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
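
The lt 1.15 2 probe above is a small field-wise version comparison from scripts/common.sh. A sketch of the same logic, simplified to the '<' case the trace exercises (the trace's helper is named lt; the name and padding of missing fields here are illustrative):

    # Return 0 when version $1 sorts strictly before version $2,
    # comparing dot/dash/colon-separated fields numerically.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields count as 0, so 2 is treated as 2.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x: use the legacy --rc flags'
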
00:05:05.898 14:51:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 14:51:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 14:51:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 14:51:28 -- common/autotest_common.sh@10 -- # set +x ************************************ START TEST guess_driver ************************************ 14:51:28 -- common/autotest_common.sh@1114 -- # guess_driver 14:51:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 14:51:28 -- setup/driver.sh@47 -- # local fail=0 14:51:28 -- setup/driver.sh@49 -- # pick_driver 14:51:28 -- setup/driver.sh@36 -- # vfio 00:05:05.898 14:51:28 -- setup/driver.sh@21 -- # local iommu_groups 00:05:05.898 14:51:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:05.898 14:51:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:05.898 14:51:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:05.898 14:51:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:05.898 14:51:28 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:05.898 14:51:28 -- setup/driver.sh@32 -- # return 1 (no IOMMU groups and unsafe no-IOMMU mode is off, so vfio is rejected) 00:05:05.898 14:51:28 -- setup/driver.sh@38 -- # uio 00:05:05.898 14:51:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:05.898 14:51:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:05.898 14:51:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] Looking for driver=uio_pci_generic 14:51:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 14:51:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 14:51:28 -- setup/driver.sh@45 -- # setup output config 14:51:28 -- setup/common.sh@9 -- # [[ output == output ]] 14:51:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 14:51:29 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 14:51:29 -- setup/driver.sh@58 -- # continue (four '->' marker lines follow; each reports uio_pci_generic, matching the chosen driver) 14:51:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 14:51:29 -- setup/driver.sh@65 -- # setup reset 14:51:29 -- setup/common.sh@9 -- # [[ reset == output ]] 14:51:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.459 00:05:12.459 real 0m6.778s 00:05:12.459 user 0m0.642s 00:05:12.459 sys 0m1.205s 00:05:12.459 14:51:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.459 ************************************ 00:05:12.459 END TEST guess_driver 00:05:12.459 ************************************
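
Condensed, the pick traced above is: prefer vfio when the kernel exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if its module resolves. A sketch under those assumptions, not the harness's exact code (nullglob keeps the empty-glob count honest; "vfio-pci" is an assumed name for the vfio branch, which this run never reaches):

    shopt -s nullglob   # an empty /sys/kernel/iommu_groups must count as 0, not 1
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci   # assumed label; this VM never takes this branch
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
            return 1
        fi
    }
    pick_driver   # prints uio_pci_generic on this VM (no IOMMU groups)
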
00:05:12.459 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:12.459 ************************************ 00:05:12.459 END TEST driver 00:05:12.459 ************************************ 00:05:12.459 00:05:12.459 real 0m12.689s 00:05:12.459 user 0m0.985s 00:05:12.459 sys 0m1.892s 00:05:12.459 14:51:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.459 14:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:12.459 14:51:35 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 14:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 14:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 14:51:35 -- common/autotest_common.sh@10 -- # set +x ************************************ START TEST devices ************************************ 14:51:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh * Looking for test storage... * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 14:51:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 14:51:35 -- common/autotest_common.sh@1690 -- # lcov --version 14:51:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 14:51:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 (the same cmp_versions probe as in the driver test: 1.15 sorts before 2, so the legacy LCOV_OPTS/LCOV coverage flags are re-exported) 00:05:12.460 14:51:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:12.460 14:51:35 -- setup/devices.sh@192 -- # setup reset 14:51:35 -- setup/common.sh@9 -- # [[ reset == output ]] 14:51:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.717 14:51:36 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:12.717 14:51:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:12.717 14:51:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:12.717 14:51:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* (each of nvme0c0n1, nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1 and nvme3n1 is probed in turn: its queue/zoned attribute exists and reads none, so [[ none != none ]] fails and nothing is recorded as zoned)
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:12.718 14:51:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:05:12.718 14:51:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:12.718 14:51:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:12.718 14:51:36 -- setup/devices.sh@196 -- # blocks=() 00:05:12.718 14:51:36 -- setup/devices.sh@196 -- # declare -a blocks 00:05:12.718 14:51:36 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:12.718 14:51:36 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:12.976 14:51:36 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:12.976 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:12.976 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:05:12.976 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:12.976 14:51:36 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 
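The get_zoned_devs pass above classifies every /sys/block/nvme* entry by reading its queue/zoned attribute; a device counts as zoned only when that file exists and reads something other than "none". A minimal standalone sketch of the same check (helper and variable names here are illustrative, not the SPDK functions):

#!/usr/bin/env bash
# A device is zoned when /sys/block/<name>/queue/zoned exists and
# does not read "none" -- the same test the trace above performs.
declare -A zoned_devs=()

is_zoned() {                      # illustrative, not SPDK's is_block_zoned
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for path in /sys/block/nvme*; do
  [[ -e $path ]] || continue      # the glob may match nothing
  name=${path##*/}
  is_zoned "$name" && zoned_devs[$name]=1
done

(( ${#zoned_devs[@]} )) && printf 'zoned device: %s\n' "${!zoned_devs[@]}"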
00:05:12.976 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:12.976 No valid GPT data, bailing 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:12.976 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:12.976 14:51:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:12.976 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:12.976 14:51:36 -- setup/common.sh@80 -- # echo 1073741824 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:12.976 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.976 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.976 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:12.976 14:51:36 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:12.976 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:12.976 No valid GPT data, bailing 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:12.976 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:12.976 14:51:36 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:12.976 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:12.976 14:51:36 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.976 14:51:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.976 14:51:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.976 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.976 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.976 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:12.976 14:51:36 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:12.976 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:12.976 No valid GPT data, bailing 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:12.976 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:12.976 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:12.976 14:51:36 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:12.976 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:12.976 14:51:36 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.976 14:51:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.976 14:51:36 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.976 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:12.976 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:12.976 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:12.976 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:12.976 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:12.976 14:51:36 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:12.976 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:12.977 No valid GPT data, bailing 00:05:12.977 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:12.977 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:12.977 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:12.977 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:12.977 14:51:36 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:12.977 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:12.977 14:51:36 -- setup/common.sh@80 -- # echo 4294967296 00:05:12.977 14:51:36 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:12.977 14:51:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.977 14:51:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:12.977 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.977 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:12.977 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:12.977 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:12.977 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:12.977 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:12.977 14:51:36 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:05:12.977 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:13.235 No valid GPT data, bailing 00:05:13.235 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:13.235 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:13.235 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:13.235 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:13.235 14:51:36 -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:13.235 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:13.235 14:51:36 -- setup/common.sh@80 -- # echo 6343335936 00:05:13.235 14:51:36 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:13.235 14:51:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.235 14:51:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:13.235 14:51:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:13.235 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:13.235 14:51:36 -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:13.235 14:51:36 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:13.235 14:51:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.235 14:51:36 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:13.235 14:51:36 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:05:13.235 14:51:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:13.235 No valid GPT data, bailing 
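Before a disk is admitted to the test set, block_in_use probes it with spdk-gpt.py and then blkid -s PTTYPE; only when no partition table is found does sec_size_to_bytes compare the capacity against min_disk_size. The size side of that check can be reproduced from sysfs alone, since /sys/block/<dev>/size is always expressed in 512-byte sectors. A sketch with hypothetical device names, not the SPDK helpers:

#!/usr/bin/env bash
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in the trace

sec_size_to_bytes() {
  local dev=$1
  [[ -e /sys/block/$dev ]] || return 1
  # sysfs reports size in 512-byte sectors regardless of the
  # device's logical block size
  echo $(( $(< "/sys/block/$dev/size") * 512 ))
}

for dev in nvme0n1 nvme1n1; do              # hypothetical names
  pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) || pt=
  if [[ -n $pt ]]; then
    echo "$dev carries a $pt partition table, skipping"
    continue
  fi
  size=$(sec_size_to_bytes "$dev") || continue
  (( size >= min_disk_size )) && echo "$dev usable: $size bytes"
done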
00:05:13.235 14:51:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:13.235 14:51:36 -- scripts/common.sh@393 -- # pt= 00:05:13.235 14:51:36 -- scripts/common.sh@394 -- # return 1 00:05:13.235 14:51:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:13.235 14:51:36 -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:13.235 14:51:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:13.235 14:51:36 -- setup/common.sh@80 -- # echo 5368709120 00:05:13.235 14:51:36 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:13.235 14:51:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:13.235 14:51:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:13.235 14:51:36 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:13.235 14:51:36 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:05:13.235 14:51:36 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:13.235 14:51:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.235 14:51:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.235 14:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:13.235 ************************************ 00:05:13.235 START TEST nvme_mount 00:05:13.235 ************************************ 00:05:13.235 14:51:36 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:13.235 14:51:36 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:05:13.235 14:51:36 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:05:13.235 14:51:36 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.235 14:51:36 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.235 14:51:36 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:05:13.235 14:51:36 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:13.235 14:51:36 -- setup/common.sh@40 -- # local part_no=1 00:05:13.235 14:51:36 -- setup/common.sh@41 -- # local size=1073741824 00:05:13.235 14:51:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.235 14:51:36 -- setup/common.sh@44 -- # parts=() 00:05:13.235 14:51:36 -- setup/common.sh@44 -- # local parts 00:05:13.235 14:51:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.235 14:51:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.235 14:51:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.235 14:51:36 -- setup/common.sh@46 -- # (( part++ )) 00:05:13.235 14:51:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.235 14:51:36 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:13.236 14:51:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:13.236 14:51:36 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:05:14.170 Creating new GPT entries in memory. 00:05:14.170 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.170 other utilities. 00:05:14.170 14:51:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.170 14:51:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.170 14:51:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:14.170 14:51:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.170 14:51:37 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:15.545 Creating new GPT entries in memory. 
00:05:15.545 The operation has completed successfully. 00:05:15.545 14:51:38 -- setup/common.sh@57 -- # (( part++ )) 00:05:15.545 14:51:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.545 14:51:38 -- setup/common.sh@62 -- # wait 66119 00:05:15.545 14:51:38 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.545 14:51:38 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:15.545 14:51:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.545 14:51:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:15.545 14:51:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:15.545 14:51:38 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.545 14:51:38 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.545 14:51:38 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:15.545 14:51:38 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:15.545 14:51:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.545 14:51:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.545 14:51:38 -- setup/devices.sh@53 -- # local found=0 00:05:15.545 14:51:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.545 14:51:38 -- setup/devices.sh@56 -- # : 00:05:15.545 14:51:38 -- setup/devices.sh@59 -- # local pci status 00:05:15.545 14:51:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.545 14:51:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:15.545 14:51:38 -- setup/devices.sh@47 -- # setup output config 00:05:15.545 14:51:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.545 14:51:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.545 14:51:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.545 14:51:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.545 14:51:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.545 14:51:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.802 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.802 14:51:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:15.802 14:51:39 -- setup/devices.sh@63 -- # found=1 00:05:15.802 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.802 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.802 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.802 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.802 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.802 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:15.802 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.059 14:51:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.059 14:51:39 -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.059 14:51:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.059 14:51:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.059 14:51:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.060 14:51:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:16.060 14:51:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.060 14:51:39 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.060 14:51:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:16.060 14:51:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:16.060 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.060 14:51:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:16.060 14:51:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:16.319 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.319 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.319 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.319 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:16.319 14:51:39 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:16.319 14:51:39 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:16.319 14:51:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.319 14:51:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:16.319 14:51:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:16.319 14:51:39 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.319 14:51:39 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.319 14:51:39 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:16.319 14:51:39 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:16.319 14:51:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.319 14:51:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.319 14:51:39 -- setup/devices.sh@53 -- # local found=0 00:05:16.319 14:51:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.319 14:51:39 -- setup/devices.sh@56 -- # : 00:05:16.319 14:51:39 -- setup/devices.sh@59 -- # local pci status 00:05:16.319 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.319 14:51:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:16.319 14:51:39 -- setup/devices.sh@47 -- # setup output config 00:05:16.319 14:51:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.319 14:51:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.319 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.319 14:51:39 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:16.631 14:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.631 14:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.913 14:51:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:16.913 14:51:40 -- setup/devices.sh@63 -- # found=1 00:05:16.913 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.913 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.913 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:16.913 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.913 14:51:40 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:16.913 14:51:40 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.913 14:51:40 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.913 14:51:40 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.913 14:51:40 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.913 14:51:40 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:16.913 14:51:40 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:16.913 14:51:40 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:16.913 14:51:40 -- setup/devices.sh@50 -- # local mount_point= 00:05:16.913 14:51:40 -- setup/devices.sh@51 -- # local test_file= 00:05:16.913 14:51:40 -- setup/devices.sh@53 -- # local found=0 00:05:16.913 14:51:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.913 14:51:40 -- setup/devices.sh@59 -- # local pci status 00:05:16.913 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.913 14:51:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:16.913 14:51:40 -- setup/devices.sh@47 -- # setup output config 00:05:16.913 14:51:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.913 14:51:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.173 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.173 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.173 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.173 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.432 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.432 14:51:40 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:17.432 14:51:40 -- setup/devices.sh@63 -- # found=1 00:05:17.432 14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.432 14:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.432 
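verify() above drives its detection loop by piping `setup.sh config` lines into `read -r pci _ _ status`: the first field is the PCI address, the next two (the parenthesized vendor/device pair) are discarded, and the remainder is the status text matched against the expected mount. The parse in isolation, fed canned lines shaped like the trace output instead of the real setup.sh (a sketch, with the line layout assumed from the log):

#!/usr/bin/env bash
allowed=0000:00:08.0
want='mount@nvme1n1:nvme1n1'
found=0

# Fields: <pci> (<vendor> <device>): <free-form status...>
while read -r pci _ _ status; do
  [[ $pci == "$allowed" ]] || continue
  [[ $status == *"$want"* ]] && found=1
done <<'EOF'
0000:00:06.0 (1b36 0010): Active devices: data@nvme2n1, so not binding PCI dev
0000:00:08.0 (1b36 0010): Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev
EOF

(( found == 1 )) && echo 'expected mount is active on the allowed controller'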
14:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.690 14:51:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.690 14:51:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.690 14:51:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:17.690 14:51:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.690 14:51:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.690 14:51:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.690 14:51:41 -- setup/devices.sh@68 -- # return 0 00:05:17.690 14:51:41 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.690 14:51:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.690 14:51:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:17.690 14:51:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:17.690 14:51:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:17.690 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.690 ************************************ 00:05:17.690 END TEST nvme_mount 00:05:17.690 ************************************ 00:05:17.690 00:05:17.690 real 0m4.558s 00:05:17.690 user 0m0.944s 00:05:17.690 sys 0m1.296s 00:05:17.690 14:51:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.690 14:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.690 14:51:41 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.690 14:51:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.690 14:51:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.690 14:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.690 ************************************ 00:05:17.690 START TEST dm_mount 00:05:17.690 ************************************ 00:05:17.690 14:51:41 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:17.690 14:51:41 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:17.690 14:51:41 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:17.690 14:51:41 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:17.690 14:51:41 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:05:17.690 14:51:41 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:17.690 14:51:41 -- setup/common.sh@40 -- # local part_no=2 00:05:17.690 14:51:41 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.690 14:51:41 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.690 14:51:41 -- setup/common.sh@44 -- # parts=() 00:05:17.690 14:51:41 -- setup/common.sh@44 -- # local parts 00:05:17.690 14:51:41 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.690 14:51:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.690 14:51:41 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.690 14:51:41 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.690 14:51:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.690 14:51:41 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.690 14:51:41 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.691 14:51:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.691 14:51:41 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.691 14:51:41 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:17.691 14:51:41 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:19.076 Creating new GPT entries in memory. 00:05:19.076 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:19.076 other utilities. 00:05:19.076 14:51:42 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.076 14:51:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.076 14:51:42 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.076 14:51:42 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.076 14:51:42 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:20.018 Creating new GPT entries in memory. 00:05:20.018 The operation has completed successfully. 00:05:20.018 14:51:43 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.018 14:51:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.018 14:51:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.018 14:51:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.018 14:51:43 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:20.964 The operation has completed successfully. 00:05:20.964 14:51:44 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.964 14:51:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.964 14:51:44 -- setup/common.sh@62 -- # wait 66743 00:05:20.964 14:51:44 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.964 14:51:44 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.964 14:51:44 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.964 14:51:44 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.964 14:51:44 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.964 14:51:44 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.964 14:51:44 -- setup/devices.sh@161 -- # break 00:05:20.964 14:51:44 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.964 14:51:44 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.964 14:51:44 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.964 14:51:44 -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.964 14:51:44 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:20.964 14:51:44 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:20.965 14:51:44 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.965 14:51:44 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:20.965 14:51:44 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.965 14:51:44 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.965 14:51:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.965 14:51:44 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.965 14:51:44 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.965 14:51:44 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:20.965 14:51:44 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:20.965 14:51:44 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 
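partition_drive above serializes disk access with flock, zaps the GPT with sgdisk --zap-all, and then lays out each partition with --new=<n>:<start>:<end>, advancing part_start/part_end per iteration as common.sh@58-59 shows. The offset arithmetic in isolation (echoing the sgdisk commands instead of running them; the derivation of `size` is copied from the trace):

#!/usr/bin/env bash
# First partition starts at LBA 2048; each subsequent one starts
# right after the previous end. `size` is computed exactly as in
# the trace: 1073741824 / 4096 = 262144 sectors per partition.
size=$((1073741824 / 4096))
part_start=0 part_end=0

for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/nvme1n1 --new=${part}:${part_start}:${part_end}"
done
# Prints --new=1:2048:264191 and --new=2:264192:526335, the same
# ranges the dm_mount run above created.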
00:05:20.965 14:51:44 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.965 14:51:44 -- setup/devices.sh@53 -- # local found=0 00:05:20.965 14:51:44 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.965 14:51:44 -- setup/devices.sh@56 -- # : 00:05:20.965 14:51:44 -- setup/devices.sh@59 -- # local pci status 00:05:20.965 14:51:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.965 14:51:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:20.965 14:51:44 -- setup/devices.sh@47 -- # setup output config 00:05:20.965 14:51:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.965 14:51:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.226 14:51:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.226 14:51:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.226 14:51:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.226 14:51:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.486 14:51:44 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.486 14:51:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.486 14:51:44 -- setup/devices.sh@63 -- # found=1 00:05:21.486 14:51:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.486 14:51:44 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.486 14:51:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.746 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.746 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.746 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:21.746 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.746 14:51:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.746 14:51:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.746 14:51:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.746 14:51:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.746 14:51:45 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.746 14:51:45 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.746 14:51:45 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:21.746 14:51:45 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:21.746 14:51:45 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:21.746 14:51:45 -- setup/devices.sh@50 -- # local mount_point= 00:05:21.746 14:51:45 -- setup/devices.sh@51 -- # local test_file= 00:05:21.746 14:51:45 -- setup/devices.sh@53 -- # local found=0 00:05:21.746 14:51:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.746 14:51:45 -- setup/devices.sh@59 -- # local pci status 00:05:21.746 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.746 14:51:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:21.746 
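Once the dm device exists, the script resolves /dev/mapper/nvme_dm_test with readlink -f to its dm-N kernel name and confirms that both backing partitions list it under /sys/class/block/<part>/holders. That resolution, reduced to a sketch (names copied from the trace; nothing here is SPDK-specific):

#!/usr/bin/env bash
name=nvme_dm_test                 # mapper name used by the test

# /dev/mapper/<name> is a symlink to /dev/dm-N; the trailing path
# component is the kernel name that appears under holders/.
dm=$(readlink -f "/dev/mapper/$name") || exit 1   # e.g. /dev/dm-0
dm=${dm##*/}                                      # e.g. dm-0

for part in nvme1n1p1 nvme1n1p2; do
  if [[ -e /sys/class/block/$part/holders/$dm ]]; then
    echo "$part is held by $dm"
  fi
done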
14:51:45 -- setup/devices.sh@47 -- # setup output config 00:05:21.746 14:51:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.746 14:51:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.004 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.004 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.004 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.004 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.264 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.264 14:51:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:22.264 14:51:45 -- setup/devices.sh@63 -- # found=1 00:05:22.264 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.264 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.264 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.264 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.265 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.523 14:51:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:22.523 14:51:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.523 14:51:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.523 14:51:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.523 14:51:45 -- setup/devices.sh@68 -- # return 0 00:05:22.523 14:51:45 -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.523 14:51:45 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.523 14:51:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.523 14:51:45 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.523 14:51:46 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:22.523 14:51:46 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:22.523 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.523 14:51:46 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:22.523 14:51:46 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:22.523 00:05:22.523 real 0m4.782s 00:05:22.523 user 0m0.654s 00:05:22.523 sys 0m0.919s 00:05:22.523 14:51:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.523 ************************************ 00:05:22.523 END TEST dm_mount 00:05:22.523 ************************************ 00:05:22.523 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.523 14:51:46 -- setup/devices.sh@1 -- # cleanup 00:05:22.523 14:51:46 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.523 14:51:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.524 14:51:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:22.524 14:51:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:22.524 14:51:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:22.524 14:51:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:22.781 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.781 /dev/nvme1n1: 8 bytes were erased at offset 
0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:22.781 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.781 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:22.781 14:51:46 -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.781 14:51:46 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.781 14:51:46 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.781 14:51:46 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:22.781 14:51:46 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:22.781 14:51:46 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:22.781 14:51:46 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:22.781 00:05:22.781 real 0m11.073s 00:05:22.781 user 0m2.338s 00:05:22.781 sys 0m2.910s 00:05:22.781 14:51:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.781 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.781 ************************************ 00:05:22.781 END TEST devices 00:05:22.781 ************************************ 00:05:23.038 00:05:23.038 real 0m39.402s 00:05:23.038 user 0m7.760s 00:05:23.038 sys 0m10.930s 00:05:23.038 14:51:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.038 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.038 ************************************ 00:05:23.038 END TEST setup.sh 00:05:23.038 ************************************ 00:05:23.038 14:51:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:23.038 Hugepages 00:05:23.038 node hugesize free / total 00:05:23.038 node0 1048576kB 0 / 0 00:05:23.038 node0 2048kB 2048 / 2048 00:05:23.038 00:05:23.039 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.039 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.296 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:23.296 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:23.296 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:23.296 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.296 14:51:46 -- spdk/autotest.sh@128 -- # uname -s 00:05:23.296 14:51:46 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:23.296 14:51:46 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:23.296 14:51:46 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.487 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.487 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.487 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.487 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.487 14:51:47 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:25.531 14:51:48 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:25.531 14:51:48 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:25.531 14:51:48 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.531 14:51:48 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:25.531 14:51:48 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:25.531 14:51:48 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:25.531 14:51:48 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.531 14:51:48 -- 
common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.531 14:51:48 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:25.531 14:51:49 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:25.531 14:51:49 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:25.531 14:51:49 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.097 Waiting for block devices as requested 00:05:26.097 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.097 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.097 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.381 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.647 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:31.647 14:51:54 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.647 14:51:54 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.647 14:51:54 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.647 14:51:54 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.647 14:51:54 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1552 -- # continue 00:05:31.647 14:51:54 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.647 14:51:54 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:31.647 14:51:54 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:31.647 14:51:54 -- 
common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.647 14:51:54 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.647 14:51:54 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.647 14:51:54 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.647 14:51:54 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.647 14:51:54 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.647 14:51:54 -- common/autotest_common.sh@1552 -- # continue 00:05:31.648 14:51:54 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.648 14:51:54 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.648 14:51:54 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.648 14:51:54 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.648 14:51:54 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1552 -- # continue 00:05:31.648 14:51:54 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.648 14:51:54 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # 
readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:05:31.648 14:51:54 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.648 14:51:54 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.648 14:51:54 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.648 14:51:54 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.648 14:51:54 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.648 14:51:54 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.648 14:51:54 -- common/autotest_common.sh@1552 -- # continue 00:05:31.648 14:51:54 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:31.648 14:51:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.648 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:31.648 14:51:54 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:31.648 14:51:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.648 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:31.648 14:51:54 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.583 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.583 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.583 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.583 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.583 14:51:56 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:32.583 14:51:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.583 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:32.843 14:51:56 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:32.843 14:51:56 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:32.843 14:51:56 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:32.843 14:51:56 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:32.843 14:51:56 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:32.843 14:51:56 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:32.843 14:51:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:32.843 14:51:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:32.843 14:51:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
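Each controller in the loop above is interrogated with `nvme id-ctrl`, and the reported OACS word (0x12a here) is masked with 0x8: bit 3 of OACS advertises NVMe Namespace Management support, and when it is set the script goes on to read unvmcap, the unallocated capacity. The same extraction as a standalone sketch (requires nvme-cli and a real controller node; /dev/nvme0 is simply the name from the trace):

#!/usr/bin/env bash
ctrl=/dev/nvme0

# `nvme id-ctrl` prints e.g. "oacs : 0x12a"; keep the value field.
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)

# Bit 3 (0x8) of OACS is Namespace Management/Attachment support;
# 0x12a & 0x8 == 8, which is why the trace sets oacs_ns_manage=8.
oacs_ns_manage=$(( oacs & 0x8 ))

if (( oacs_ns_manage != 0 )); then
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
  echo "namespace management supported; unvmcap:$unvmcap"
fi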
00:05:32.843 14:51:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.843 14:51:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:32.843 14:51:56 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:32.843 14:51:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:32.843 14:51:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.843 14:51:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.843 14:51:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.843 14:51:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:32.843 14:51:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.843 14:51:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:32.843 14:51:56 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1588 -- # return 0 00:05:32.843 14:51:56 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:32.843 14:51:56 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:32.843 14:51:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:32.843 14:51:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:32.843 14:51:56 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:32.843 14:51:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.843 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:32.843 14:51:56 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.843 14:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.843 14:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.843 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:32.843 ************************************ 00:05:32.843 START TEST env 00:05:32.843 ************************************ 00:05:32.843 14:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.843 * Looking for test storage... 
00:05:32.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:32.843 14:51:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.843 14:51:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.843 14:51:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.102 14:51:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.102 14:51:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.102 14:51:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.102 14:51:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.102 14:51:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.102 14:51:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.102 14:51:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.102 14:51:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.103 14:51:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.103 14:51:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.103 14:51:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.103 14:51:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.103 14:51:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.103 14:51:56 -- scripts/common.sh@344 -- # : 1 00:05:33.103 14:51:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.103 14:51:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.103 14:51:56 -- scripts/common.sh@364 -- # decimal 1 00:05:33.103 14:51:56 -- scripts/common.sh@352 -- # local d=1 00:05:33.103 14:51:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.103 14:51:56 -- scripts/common.sh@354 -- # echo 1 00:05:33.103 14:51:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.103 14:51:56 -- scripts/common.sh@365 -- # decimal 2 00:05:33.103 14:51:56 -- scripts/common.sh@352 -- # local d=2 00:05:33.103 14:51:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.103 14:51:56 -- scripts/common.sh@354 -- # echo 2 00:05:33.103 14:51:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.103 14:51:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.103 14:51:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.103 14:51:56 -- scripts/common.sh@367 -- # return 0 00:05:33.103 14:51:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.103 14:51:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.103 --rc genhtml_branch_coverage=1 00:05:33.103 --rc genhtml_function_coverage=1 00:05:33.103 --rc genhtml_legend=1 00:05:33.103 --rc geninfo_all_blocks=1 00:05:33.103 --rc geninfo_unexecuted_blocks=1 00:05:33.103 00:05:33.103 ' 00:05:33.103 14:51:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.103 --rc genhtml_branch_coverage=1 00:05:33.103 --rc genhtml_function_coverage=1 00:05:33.103 --rc genhtml_legend=1 00:05:33.103 --rc geninfo_all_blocks=1 00:05:33.103 --rc geninfo_unexecuted_blocks=1 00:05:33.103 00:05:33.103 ' 00:05:33.103 14:51:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.103 --rc genhtml_branch_coverage=1 00:05:33.103 --rc genhtml_function_coverage=1 00:05:33.103 --rc genhtml_legend=1 00:05:33.103 --rc geninfo_all_blocks=1 00:05:33.103 --rc geninfo_unexecuted_blocks=1 00:05:33.103 00:05:33.103 ' 00:05:33.103 14:51:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.103 --rc genhtml_branch_coverage=1 00:05:33.103 --rc genhtml_function_coverage=1 00:05:33.103 --rc genhtml_legend=1 00:05:33.103 --rc geninfo_all_blocks=1 00:05:33.103 --rc geninfo_unexecuted_blocks=1 00:05:33.103 00:05:33.103 ' 00:05:33.103 14:51:56 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:33.103 14:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.103 14:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.103 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:33.103 ************************************ 00:05:33.103 START TEST env_memory 00:05:33.103 ************************************ 00:05:33.103 14:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:33.103 00:05:33.103 00:05:33.103 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.103 http://cunit.sourceforge.net/ 00:05:33.103 00:05:33.103 00:05:33.103 Suite: memory 00:05:33.103 Test: alloc and free memory map ...[2024-11-18 14:51:56.537669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:33.103 passed 00:05:33.103 Test: mem map translation ...[2024-11-18 14:51:56.576603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:33.103 [2024-11-18 14:51:56.576660] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:33.103 [2024-11-18 14:51:56.576722] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:33.103 [2024-11-18 14:51:56.576737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:33.103 passed 00:05:33.103 Test: mem map registration ...[2024-11-18 14:51:56.645002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:33.103 [2024-11-18 14:51:56.645039] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:33.103 passed 00:05:33.361 Test: mem map adjacent registrations ...passed 00:05:33.361 00:05:33.361 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.361 suites 1 1 n/a 0 0 00:05:33.361 tests 4 4 4 0 0 00:05:33.361 asserts 152 152 152 0 n/a 00:05:33.361 00:05:33.361 Elapsed time = 0.234 seconds 00:05:33.361 00:05:33.361 real 0m0.269s 00:05:33.361 user 0m0.239s 00:05:33.361 sys 0m0.022s 00:05:33.361 14:51:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.361 ************************************ 00:05:33.361 END TEST env_memory 00:05:33.361 ************************************ 00:05:33.361 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:33.361 14:51:56 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:33.361 14:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.361 14:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.361 14:51:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.361 ************************************ 00:05:33.361 START TEST env_vtophys 00:05:33.361 ************************************ 00:05:33.361 14:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:33.361 EAL: lib.eal log level changed from notice to debug 00:05:33.361 EAL: Detected lcore 0 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 1 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 2 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 3 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 4 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 5 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 6 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 7 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 8 as core 0 on socket 0 00:05:33.361 EAL: Detected lcore 9 as core 0 on socket 0 00:05:33.361 EAL: Maximum logical cores by configuration: 128 00:05:33.361 EAL: Detected CPU lcores: 10 00:05:33.361 EAL: Detected NUMA nodes: 1 00:05:33.361 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:33.361 EAL: Detected shared linkage of DPDK 00:05:33.361 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:33.361 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:33.361 EAL: Registered [vdev] bus. 00:05:33.361 EAL: bus.vdev log level changed from disabled to notice 00:05:33.361 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:33.361 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:33.361 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:33.362 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:33.362 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:33.362 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:33.362 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:33.362 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:33.362 EAL: No shared files mode enabled, IPC will be disabled 00:05:33.362 EAL: No shared files mode enabled, IPC is disabled 00:05:33.362 EAL: Selected IOVA mode 'PA' 00:05:33.362 EAL: Probing VFIO support... 00:05:33.362 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:33.362 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:33.362 EAL: Ask a virtual area of 0x2e000 bytes 00:05:33.362 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:33.362 EAL: Setting up physically contiguous memory... 
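EAL above looks for the vfio kernel module, fails to find it, skips VFIO support, and falls back to IOVA mode 'PA'. A rough way to reproduce that check by hand (simplified; EAL's real decision also weighs kernel support and device bindings, and this run's devices were bound to uio_pci_generic by setup.sh earlier):

    if [[ -d /sys/module/vfio ]]; then
        echo "vfio loaded: EAL can probe VFIO and may select IOVA mode VA"
    else
        echo "vfio not loaded: EAL skips VFIO support"   # the case in this run
    fi
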
00:05:33.362 EAL: Setting maximum number of open files to 524288 00:05:33.362 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:33.362 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:33.362 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.362 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:33.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.362 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.362 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:33.362 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:33.362 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.362 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:33.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.362 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.362 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:33.362 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:33.362 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.362 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:33.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.362 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.362 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:33.362 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:33.362 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.362 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:33.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.362 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.362 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:33.362 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:33.362 EAL: Hugepages will be freed exactly as allocated. 00:05:33.362 EAL: No shared files mode enabled, IPC is disabled 00:05:33.362 EAL: No shared files mode enabled, IPC is disabled 00:05:33.622 EAL: TSC frequency is ~2600000 KHz 00:05:33.622 EAL: Main lcore 0 is ready (tid=7fc2691eea40;cpuset=[0]) 00:05:33.622 EAL: Trying to obtain current memory policy. 00:05:33.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.622 EAL: Restoring previous memory policy: 0 00:05:33.622 EAL: request: mp_malloc_sync 00:05:33.622 EAL: No shared files mode enabled, IPC is disabled 00:05:33.622 EAL: Heap on socket 0 was expanded by 2MB 00:05:33.622 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:33.622 EAL: No shared files mode enabled, IPC is disabled 00:05:33.622 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:33.622 EAL: Mem event callback 'spdk:(nil)' registered 00:05:33.622 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:33.622 00:05:33.622 00:05:33.622 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.622 http://cunit.sourceforge.net/ 00:05:33.622 00:05:33.622 00:05:33.622 Suite: components_suite 00:05:33.881 Test: vtophys_malloc_test ...passed 00:05:33.881 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
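Each of the four memseg lists above reserves 0x400000000 bytes of virtual address space, which is exactly n_segs times the detected hugepage size; a quick arithmetic check of the numbers printed in the log:

    n_segs=8192
    hugepage_sz=$((2 * 1024 * 1024))              # 2097152, as detected above
    printf '0x%x\n' $((n_segs * hugepage_sz))     # 0x400000000 = 16 GiB per list
    echo $((4 * n_segs * hugepage_sz / 1024**3))  # 64 GiB of VA reserved in total
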
00:05:33.881 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.881 EAL: Restoring previous memory policy: 4 00:05:33.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.881 EAL: request: mp_malloc_sync 00:05:33.881 EAL: No shared files mode enabled, IPC is disabled 00:05:33.881 EAL: Heap on socket 0 was expanded by 4MB 00:05:33.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.881 EAL: request: mp_malloc_sync 00:05:33.881 EAL: No shared files mode enabled, IPC is disabled 00:05:33.881 EAL: Heap on socket 0 was shrunk by 4MB 00:05:33.881 EAL: Trying to obtain current memory policy. 00:05:33.881 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.881 EAL: Restoring previous memory policy: 4 00:05:33.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.881 EAL: request: mp_malloc_sync 00:05:33.881 EAL: No shared files mode enabled, IPC is disabled 00:05:33.881 EAL: Heap on socket 0 was expanded by 6MB 00:05:33.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was shrunk by 6MB 00:05:33.882 EAL: Trying to obtain current memory policy. 00:05:33.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.882 EAL: Restoring previous memory policy: 4 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was expanded by 10MB 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was shrunk by 10MB 00:05:33.882 EAL: Trying to obtain current memory policy. 00:05:33.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.882 EAL: Restoring previous memory policy: 4 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was expanded by 18MB 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was shrunk by 18MB 00:05:33.882 EAL: Trying to obtain current memory policy. 00:05:33.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.882 EAL: Restoring previous memory policy: 4 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was expanded by 34MB 00:05:33.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.882 EAL: request: mp_malloc_sync 00:05:33.882 EAL: No shared files mode enabled, IPC is disabled 00:05:33.882 EAL: Heap on socket 0 was shrunk by 34MB 00:05:33.882 EAL: Trying to obtain current memory policy. 
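Each expand/shrink round above is driven by the registered 'spdk:(nil)' mem event callback as the test allocates and then frees progressively larger buffers. One rough way to watch these rounds from outside the process is to poll the kernel's hugepage counters while the test runs (an observation aid only, not part of the test itself):

    watch -n 0.1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'
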
00:05:33.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.141 EAL: Restoring previous memory policy: 4 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.141 EAL: request: mp_malloc_sync 00:05:34.141 EAL: No shared files mode enabled, IPC is disabled 00:05:34.141 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.141 EAL: request: mp_malloc_sync 00:05:34.141 EAL: No shared files mode enabled, IPC is disabled 00:05:34.141 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.141 EAL: Trying to obtain current memory policy. 00:05:34.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.141 EAL: Restoring previous memory policy: 4 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.141 EAL: request: mp_malloc_sync 00:05:34.141 EAL: No shared files mode enabled, IPC is disabled 00:05:34.141 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.141 EAL: request: mp_malloc_sync 00:05:34.141 EAL: No shared files mode enabled, IPC is disabled 00:05:34.141 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.141 EAL: Trying to obtain current memory policy. 00:05:34.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.141 EAL: Restoring previous memory policy: 4 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.141 EAL: request: mp_malloc_sync 00:05:34.141 EAL: No shared files mode enabled, IPC is disabled 00:05:34.141 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.399 EAL: request: mp_malloc_sync 00:05:34.399 EAL: No shared files mode enabled, IPC is disabled 00:05:34.399 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.399 EAL: Trying to obtain current memory policy. 00:05:34.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.399 EAL: Restoring previous memory policy: 4 00:05:34.399 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.399 EAL: request: mp_malloc_sync 00:05:34.399 EAL: No shared files mode enabled, IPC is disabled 00:05:34.399 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.659 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.659 EAL: request: mp_malloc_sync 00:05:34.659 EAL: No shared files mode enabled, IPC is disabled 00:05:34.659 EAL: Heap on socket 0 was shrunk by 514MB 00:05:34.659 EAL: Trying to obtain current memory policy. 
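The allocation sizes exercised by this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 MB above, and 1026 MB in the final round below) follow a 2 + 2^k MB progression; a one-liner reproducing the sequence:

    for k in $(seq 1 10); do printf '%dMB ' $((2 + (1 << k))); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
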
00:05:34.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.919 EAL: Restoring previous memory policy: 4 00:05:34.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.919 EAL: request: mp_malloc_sync 00:05:34.919 EAL: No shared files mode enabled, IPC is disabled 00:05:34.919 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 passed 00:05:35.439 00:05:35.439 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.439 suites 1 1 n/a 0 0 00:05:35.439 tests 2 2 2 0 0 00:05:35.439 asserts 5407 5407 5407 0 n/a 00:05:35.439 00:05:35.439 Elapsed time = 1.841 seconds 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.439 EAL: request: mp_malloc_sync 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 EAL: No shared files mode enabled, IPC is disabled 00:05:35.439 00:05:35.439 real 0m2.092s 00:05:35.439 user 0m0.950s 00:05:35.439 sys 0m0.993s 00:05:35.439 14:51:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.439 ************************************ 00:05:35.439 END TEST env_vtophys 00:05:35.439 ************************************ 00:05:35.439 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.439 14:51:58 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:35.439 14:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.439 14:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.439 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.439 ************************************ 00:05:35.439 START TEST env_pci 00:05:35.439 ************************************ 00:05:35.440 14:51:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:35.440 00:05:35.440 00:05:35.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.440 http://cunit.sourceforge.net/ 00:05:35.440 00:05:35.440 00:05:35.440 Suite: pci 00:05:35.440 Test: pci_hook ...[2024-11-18 14:51:58.998550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68419 has claimed it 00:05:35.440 passed 00:05:35.440 00:05:35.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.440 suites 1 1 n/a 0 0 00:05:35.440 tests 1 1 1 0 0 00:05:35.440 asserts 25 25 25 0 n/a 00:05:35.440 00:05:35.440 Elapsed time = 0.007 seconds 00:05:35.440 EAL: Cannot find device (10000:00:01.0) 00:05:35.440 EAL: Failed to attach device on primary process 00:05:35.701 00:05:35.701 real 0m0.065s 00:05:35.701 user 0m0.027s 00:05:35.701 sys 0m0.037s 00:05:35.701 14:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.701 ************************************ 00:05:35.701 END TEST env_pci 00:05:35.701 ************************************ 00:05:35.701 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.701 14:51:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.701 14:51:59 -- env/env.sh@15 -- # uname 00:05:35.701 14:51:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.701 14:51:59 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:35.701 14:51:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.701 14:51:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:35.701 14:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.701 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.701 ************************************ 00:05:35.701 START TEST env_dpdk_post_init 00:05:35.701 ************************************ 00:05:35.701 14:51:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.701 EAL: Detected CPU lcores: 10 00:05:35.701 EAL: Detected NUMA nodes: 1 00:05:35.701 EAL: Detected shared linkage of DPDK 00:05:35.701 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.701 EAL: Selected IOVA mode 'PA' 00:05:35.701 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.961 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:35.961 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:35.961 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:35.961 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:35.961 Starting DPDK initialization... 00:05:35.961 Starting SPDK post initialization... 00:05:35.961 SPDK NVMe probe 00:05:35.961 Attaching to 0000:00:06.0 00:05:35.961 Attaching to 0000:00:07.0 00:05:35.961 Attaching to 0000:00:08.0 00:05:35.961 Attaching to 0000:00:09.0 00:05:35.961 Attached to 0000:00:06.0 00:05:35.961 Attached to 0000:00:07.0 00:05:35.961 Attached to 0000:00:09.0 00:05:35.961 Attached to 0000:00:08.0 00:05:35.961 Cleaning up... 00:05:35.961 00:05:35.961 real 0m0.232s 00:05:35.961 user 0m0.068s 00:05:35.961 sys 0m0.066s 00:05:35.961 14:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.961 ************************************ 00:05:35.961 END TEST env_dpdk_post_init 00:05:35.961 ************************************ 00:05:35.961 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.961 14:51:59 -- env/env.sh@26 -- # uname 00:05:35.961 14:51:59 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:35.961 14:51:59 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.961 14:51:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.961 14:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.961 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:35.961 ************************************ 00:05:35.961 START TEST env_mem_callbacks 00:05:35.961 ************************************ 00:05:35.961 14:51:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.961 EAL: Detected CPU lcores: 10 00:05:35.961 EAL: Detected NUMA nodes: 1 00:05:35.961 EAL: Detected shared linkage of DPDK 00:05:35.961 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.961 EAL: Selected IOVA mode 'PA' 00:05:35.961 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.961 00:05:35.961 00:05:35.961 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.961 http://cunit.sourceforge.net/ 00:05:35.961 00:05:35.961 00:05:35.961 Suite: memory 00:05:35.961 Test: test ... 
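env.sh assembles the argument string shown above (-c 0x1, plus --base-virtaddr on Linux) before launching the post-init test. To rerun just this test by hand with the same arguments, using the paths from this workspace (root privileges are typically needed for the device access, hence the sudo assumption here):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
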
00:05:35.961 register 0x200000200000 2097152 00:05:35.961 malloc 3145728 00:05:35.961 register 0x200000400000 4194304 00:05:35.961 buf 0x200000500000 len 3145728 PASSED 00:05:35.961 malloc 64 00:05:35.961 buf 0x2000004fff40 len 64 PASSED 00:05:35.961 malloc 4194304 00:05:35.961 register 0x200000800000 6291456 00:05:35.961 buf 0x200000a00000 len 4194304 PASSED 00:05:35.961 free 0x200000500000 3145728 00:05:35.961 free 0x2000004fff40 64 00:05:35.961 unregister 0x200000400000 4194304 PASSED 00:05:35.961 free 0x200000a00000 4194304 00:05:35.961 unregister 0x200000800000 6291456 PASSED 00:05:36.220 malloc 8388608 00:05:36.220 register 0x200000400000 10485760 00:05:36.220 buf 0x200000600000 len 8388608 PASSED 00:05:36.220 free 0x200000600000 8388608 00:05:36.220 unregister 0x200000400000 10485760 PASSED 00:05:36.220 passed 00:05:36.220 00:05:36.220 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.220 suites 1 1 n/a 0 0 00:05:36.220 tests 1 1 1 0 0 00:05:36.220 asserts 15 15 15 0 n/a 00:05:36.220 00:05:36.220 Elapsed time = 0.012 seconds 00:05:36.220 00:05:36.220 real 0m0.175s 00:05:36.220 user 0m0.027s 00:05:36.220 sys 0m0.044s 00:05:36.220 14:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.220 ************************************ 00:05:36.220 END TEST env_mem_callbacks 00:05:36.220 ************************************ 00:05:36.220 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.220 00:05:36.220 real 0m3.305s 00:05:36.220 user 0m1.469s 00:05:36.220 sys 0m1.404s 00:05:36.220 ************************************ 00:05:36.220 14:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.220 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.220 END TEST env 00:05:36.220 ************************************ 00:05:36.220 14:51:59 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:36.220 14:51:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.220 14:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.220 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.220 ************************************ 00:05:36.220 START TEST rpc 00:05:36.220 ************************************ 00:05:36.220 14:51:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:36.220 * Looking for test storage... 
00:05:36.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.220 14:51:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.220 14:51:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.220 14:51:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.220 14:51:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.220 14:51:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.220 14:51:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.220 14:51:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.220 14:51:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.220 14:51:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.481 14:51:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.481 14:51:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.481 14:51:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.481 14:51:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.481 14:51:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.481 14:51:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.481 14:51:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.481 14:51:59 -- scripts/common.sh@344 -- # : 1 00:05:36.481 14:51:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.481 14:51:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.481 14:51:59 -- scripts/common.sh@364 -- # decimal 1 00:05:36.481 14:51:59 -- scripts/common.sh@352 -- # local d=1 00:05:36.481 14:51:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.481 14:51:59 -- scripts/common.sh@354 -- # echo 1 00:05:36.481 14:51:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.481 14:51:59 -- scripts/common.sh@365 -- # decimal 2 00:05:36.481 14:51:59 -- scripts/common.sh@352 -- # local d=2 00:05:36.481 14:51:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.481 14:51:59 -- scripts/common.sh@354 -- # echo 2 00:05:36.481 14:51:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.481 14:51:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.481 14:51:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.481 14:51:59 -- scripts/common.sh@367 -- # return 0 00:05:36.481 14:51:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.481 14:51:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.481 --rc genhtml_branch_coverage=1 00:05:36.481 --rc genhtml_function_coverage=1 00:05:36.481 --rc genhtml_legend=1 00:05:36.481 --rc geninfo_all_blocks=1 00:05:36.481 --rc geninfo_unexecuted_blocks=1 00:05:36.481 00:05:36.481 ' 00:05:36.481 14:51:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.481 --rc genhtml_branch_coverage=1 00:05:36.481 --rc genhtml_function_coverage=1 00:05:36.481 --rc genhtml_legend=1 00:05:36.481 --rc geninfo_all_blocks=1 00:05:36.481 --rc geninfo_unexecuted_blocks=1 00:05:36.481 00:05:36.481 ' 00:05:36.481 14:51:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.481 --rc genhtml_branch_coverage=1 00:05:36.481 --rc genhtml_function_coverage=1 00:05:36.481 --rc genhtml_legend=1 00:05:36.481 --rc geninfo_all_blocks=1 00:05:36.481 --rc geninfo_unexecuted_blocks=1 00:05:36.481 00:05:36.481 ' 00:05:36.481 14:51:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.481 --rc genhtml_branch_coverage=1 00:05:36.481 --rc genhtml_function_coverage=1 00:05:36.481 --rc genhtml_legend=1 00:05:36.481 --rc geninfo_all_blocks=1 00:05:36.481 --rc geninfo_unexecuted_blocks=1 00:05:36.481 00:05:36.481 ' 00:05:36.481 14:51:59 -- rpc/rpc.sh@65 -- # spdk_pid=68541 00:05:36.481 14:51:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.481 14:51:59 -- rpc/rpc.sh@67 -- # waitforlisten 68541 00:05:36.481 14:51:59 -- common/autotest_common.sh@829 -- # '[' -z 68541 ']' 00:05:36.481 14:51:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.481 14:51:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.481 14:51:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.481 14:51:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.481 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.481 14:51:59 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:36.481 [2024-11-18 14:51:59.913957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.482 [2024-11-18 14:51:59.914118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68541 ] 00:05:36.482 [2024-11-18 14:52:00.067612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.744 [2024-11-18 14:52:00.114151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.744 [2024-11-18 14:52:00.114403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.744 [2024-11-18 14:52:00.114429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68541' to capture a snapshot of events at runtime. 00:05:36.744 [2024-11-18 14:52:00.114443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68541 for offline analysis/debug. 
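The spdk_tgt startup above enables the bdev tracepoint group (-e bdev, tpoint_group_mask 0x8) and prints the exact capture command to use. While pid 68541 is alive the snapshot can be taken as suggested, or the trace shm file copied for offline decoding (the spdk_trace binary path assumes the standard build layout used elsewhere in this log):

    # live capture, as suggested in the log above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 68541
    # or keep the trace shm file for later offline analysis
    cp /dev/shm/spdk_tgt_trace.pid68541 /tmp/
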
00:05:36.744 [2024-11-18 14:52:00.114473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.315 14:52:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.315 14:52:00 -- common/autotest_common.sh@862 -- # return 0 00:05:37.315 14:52:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.315 14:52:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.315 14:52:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.315 14:52:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.315 14:52:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.315 14:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.315 ************************************ 00:05:37.315 START TEST rpc_integrity 00:05:37.315 ************************************ 00:05:37.315 14:52:00 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:37.315 14:52:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.315 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.315 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.315 14:52:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.315 14:52:00 -- rpc/rpc.sh@13 -- # jq length 00:05:37.315 14:52:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.315 14:52:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.315 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.315 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.315 14:52:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.315 14:52:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.315 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.315 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.315 14:52:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.315 { 00:05:37.315 "name": "Malloc0", 00:05:37.315 "aliases": [ 00:05:37.315 "58fe0348-4def-4289-a9bd-4c359638b2a9" 00:05:37.315 ], 00:05:37.315 "product_name": "Malloc disk", 00:05:37.315 "block_size": 512, 00:05:37.315 "num_blocks": 16384, 00:05:37.315 "uuid": "58fe0348-4def-4289-a9bd-4c359638b2a9", 00:05:37.315 "assigned_rate_limits": { 00:05:37.315 "rw_ios_per_sec": 0, 00:05:37.315 "rw_mbytes_per_sec": 0, 00:05:37.315 "r_mbytes_per_sec": 0, 00:05:37.315 "w_mbytes_per_sec": 0 00:05:37.315 }, 00:05:37.315 "claimed": false, 00:05:37.315 "zoned": false, 00:05:37.315 "supported_io_types": { 00:05:37.315 "read": true, 00:05:37.315 "write": true, 00:05:37.315 "unmap": true, 00:05:37.315 "write_zeroes": true, 00:05:37.315 "flush": true, 00:05:37.315 "reset": true, 00:05:37.315 "compare": false, 00:05:37.315 "compare_and_write": false, 00:05:37.315 "abort": true, 00:05:37.315 "nvme_admin": false, 00:05:37.315 "nvme_io": false 00:05:37.315 }, 00:05:37.315 "memory_domains": [ 00:05:37.315 { 00:05:37.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.315 
"dma_device_type": 2 00:05:37.315 } 00:05:37.315 ], 00:05:37.315 "driver_specific": {} 00:05:37.315 } 00:05:37.315 ]' 00:05:37.315 14:52:00 -- rpc/rpc.sh@17 -- # jq length 00:05:37.315 14:52:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.315 14:52:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.315 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.315 [2024-11-18 14:52:00.877211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.315 [2024-11-18 14:52:00.877304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.315 [2024-11-18 14:52:00.877363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:37.315 [2024-11-18 14:52:00.877380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.315 [2024-11-18 14:52:00.879945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.315 [2024-11-18 14:52:00.880009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.315 Passthru0 00:05:37.315 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.315 14:52:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.315 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.315 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.577 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.577 14:52:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.577 { 00:05:37.577 "name": "Malloc0", 00:05:37.577 "aliases": [ 00:05:37.577 "58fe0348-4def-4289-a9bd-4c359638b2a9" 00:05:37.577 ], 00:05:37.577 "product_name": "Malloc disk", 00:05:37.577 "block_size": 512, 00:05:37.577 "num_blocks": 16384, 00:05:37.577 "uuid": "58fe0348-4def-4289-a9bd-4c359638b2a9", 00:05:37.577 "assigned_rate_limits": { 00:05:37.577 "rw_ios_per_sec": 0, 00:05:37.577 "rw_mbytes_per_sec": 0, 00:05:37.577 "r_mbytes_per_sec": 0, 00:05:37.577 "w_mbytes_per_sec": 0 00:05:37.577 }, 00:05:37.577 "claimed": true, 00:05:37.577 "claim_type": "exclusive_write", 00:05:37.577 "zoned": false, 00:05:37.577 "supported_io_types": { 00:05:37.577 "read": true, 00:05:37.577 "write": true, 00:05:37.577 "unmap": true, 00:05:37.577 "write_zeroes": true, 00:05:37.577 "flush": true, 00:05:37.577 "reset": true, 00:05:37.577 "compare": false, 00:05:37.577 "compare_and_write": false, 00:05:37.577 "abort": true, 00:05:37.577 "nvme_admin": false, 00:05:37.577 "nvme_io": false 00:05:37.577 }, 00:05:37.577 "memory_domains": [ 00:05:37.577 { 00:05:37.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.577 "dma_device_type": 2 00:05:37.577 } 00:05:37.577 ], 00:05:37.577 "driver_specific": {} 00:05:37.577 }, 00:05:37.577 { 00:05:37.577 "name": "Passthru0", 00:05:37.577 "aliases": [ 00:05:37.577 "9f2bc3b7-de94-57b2-b09e-acc27c191571" 00:05:37.577 ], 00:05:37.577 "product_name": "passthru", 00:05:37.577 "block_size": 512, 00:05:37.578 "num_blocks": 16384, 00:05:37.578 "uuid": "9f2bc3b7-de94-57b2-b09e-acc27c191571", 00:05:37.578 "assigned_rate_limits": { 00:05:37.578 "rw_ios_per_sec": 0, 00:05:37.578 "rw_mbytes_per_sec": 0, 00:05:37.578 "r_mbytes_per_sec": 0, 00:05:37.578 "w_mbytes_per_sec": 0 00:05:37.578 }, 00:05:37.578 "claimed": false, 00:05:37.578 "zoned": false, 00:05:37.578 "supported_io_types": { 00:05:37.578 "read": true, 00:05:37.578 "write": true, 00:05:37.578 "unmap": true, 00:05:37.578 
"write_zeroes": true, 00:05:37.578 "flush": true, 00:05:37.578 "reset": true, 00:05:37.578 "compare": false, 00:05:37.578 "compare_and_write": false, 00:05:37.578 "abort": true, 00:05:37.578 "nvme_admin": false, 00:05:37.578 "nvme_io": false 00:05:37.578 }, 00:05:37.578 "memory_domains": [ 00:05:37.578 { 00:05:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.578 "dma_device_type": 2 00:05:37.578 } 00:05:37.578 ], 00:05:37.578 "driver_specific": { 00:05:37.578 "passthru": { 00:05:37.578 "name": "Passthru0", 00:05:37.578 "base_bdev_name": "Malloc0" 00:05:37.578 } 00:05:37.578 } 00:05:37.578 } 00:05:37.578 ]' 00:05:37.578 14:52:00 -- rpc/rpc.sh@21 -- # jq length 00:05:37.578 14:52:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.578 14:52:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.578 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.578 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.578 14:52:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.578 14:52:00 -- rpc/rpc.sh@26 -- # jq length 00:05:37.578 ************************************ 00:05:37.578 END TEST rpc_integrity 00:05:37.578 ************************************ 00:05:37.578 14:52:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.578 00:05:37.578 real 0m0.230s 00:05:37.578 user 0m0.121s 00:05:37.578 sys 0m0.042s 00:05:37.578 14:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.578 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:01 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.578 14:52:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.578 14:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.578 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 ************************************ 00:05:37.578 START TEST rpc_plugins 00:05:37.578 ************************************ 00:05:37.578 14:52:01 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:37.578 14:52:01 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.578 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:01 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.578 14:52:01 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.578 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:01 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.578 { 00:05:37.578 "name": "Malloc1", 00:05:37.578 "aliases": [ 00:05:37.578 "7bfc567c-d5ef-484b-8aa3-08b8abcde18f" 00:05:37.578 ], 00:05:37.578 "product_name": "Malloc disk", 00:05:37.578 
"block_size": 4096, 00:05:37.578 "num_blocks": 256, 00:05:37.578 "uuid": "7bfc567c-d5ef-484b-8aa3-08b8abcde18f", 00:05:37.578 "assigned_rate_limits": { 00:05:37.578 "rw_ios_per_sec": 0, 00:05:37.578 "rw_mbytes_per_sec": 0, 00:05:37.578 "r_mbytes_per_sec": 0, 00:05:37.578 "w_mbytes_per_sec": 0 00:05:37.578 }, 00:05:37.578 "claimed": false, 00:05:37.578 "zoned": false, 00:05:37.578 "supported_io_types": { 00:05:37.578 "read": true, 00:05:37.578 "write": true, 00:05:37.578 "unmap": true, 00:05:37.578 "write_zeroes": true, 00:05:37.578 "flush": true, 00:05:37.578 "reset": true, 00:05:37.578 "compare": false, 00:05:37.578 "compare_and_write": false, 00:05:37.578 "abort": true, 00:05:37.578 "nvme_admin": false, 00:05:37.578 "nvme_io": false 00:05:37.578 }, 00:05:37.578 "memory_domains": [ 00:05:37.578 { 00:05:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.578 "dma_device_type": 2 00:05:37.578 } 00:05:37.578 ], 00:05:37.578 "driver_specific": {} 00:05:37.578 } 00:05:37.578 ]' 00:05:37.578 14:52:01 -- rpc/rpc.sh@32 -- # jq length 00:05:37.578 14:52:01 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.578 14:52:01 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.578 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:01 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.578 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.578 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.578 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.578 14:52:01 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.578 14:52:01 -- rpc/rpc.sh@36 -- # jq length 00:05:37.839 ************************************ 00:05:37.839 END TEST rpc_plugins 00:05:37.839 ************************************ 00:05:37.839 14:52:01 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.839 00:05:37.839 real 0m0.118s 00:05:37.839 user 0m0.066s 00:05:37.839 sys 0m0.017s 00:05:37.839 14:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.839 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.839 14:52:01 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.839 14:52:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.839 14:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.839 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.839 ************************************ 00:05:37.839 START TEST rpc_trace_cmd_test 00:05:37.839 ************************************ 00:05:37.839 14:52:01 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:37.839 14:52:01 -- rpc/rpc.sh@40 -- # local info 00:05:37.839 14:52:01 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.839 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.839 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:37.839 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.839 14:52:01 -- rpc/rpc.sh@42 -- # info='{ 00:05:37.839 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68541", 00:05:37.839 "tpoint_group_mask": "0x8", 00:05:37.839 "iscsi_conn": { 00:05:37.839 "mask": "0x2", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "scsi": { 00:05:37.839 "mask": "0x4", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "bdev": { 00:05:37.839 "mask": "0x8", 00:05:37.839 "tpoint_mask": 
"0xffffffffffffffff" 00:05:37.839 }, 00:05:37.839 "nvmf_rdma": { 00:05:37.839 "mask": "0x10", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "nvmf_tcp": { 00:05:37.839 "mask": "0x20", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "ftl": { 00:05:37.839 "mask": "0x40", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "blobfs": { 00:05:37.839 "mask": "0x80", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "dsa": { 00:05:37.839 "mask": "0x200", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "thread": { 00:05:37.839 "mask": "0x400", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "nvme_pcie": { 00:05:37.839 "mask": "0x800", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "iaa": { 00:05:37.839 "mask": "0x1000", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "nvme_tcp": { 00:05:37.839 "mask": "0x2000", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 }, 00:05:37.839 "bdev_nvme": { 00:05:37.839 "mask": "0x4000", 00:05:37.839 "tpoint_mask": "0x0" 00:05:37.839 } 00:05:37.839 }' 00:05:37.839 14:52:01 -- rpc/rpc.sh@43 -- # jq length 00:05:37.839 14:52:01 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:37.839 14:52:01 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:37.839 14:52:01 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:37.839 14:52:01 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:37.839 14:52:01 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:37.839 14:52:01 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:37.839 14:52:01 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:37.839 14:52:01 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.839 ************************************ 00:05:37.839 END TEST rpc_trace_cmd_test 00:05:37.839 ************************************ 00:05:37.839 14:52:01 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:37.839 00:05:37.839 real 0m0.162s 00:05:37.839 user 0m0.127s 00:05:37.839 sys 0m0.024s 00:05:37.839 14:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.839 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.101 14:52:01 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.101 14:52:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.101 14:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 ************************************ 00:05:38.101 START TEST rpc_daemon_integrity 00:05:38.101 ************************************ 00:05:38.101 14:52:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:38.101 14:52:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.101 14:52:01 -- rpc/rpc.sh@13 -- # jq length 00:05:38.101 14:52:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.101 14:52:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.101 14:52:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.101 { 00:05:38.101 "name": "Malloc2", 00:05:38.101 "aliases": [ 00:05:38.101 "8b9efdc3-66ae-431c-ba8f-8969ba456c48" 00:05:38.101 ], 00:05:38.101 "product_name": "Malloc disk", 00:05:38.101 "block_size": 512, 00:05:38.101 "num_blocks": 16384, 00:05:38.101 "uuid": "8b9efdc3-66ae-431c-ba8f-8969ba456c48", 00:05:38.101 "assigned_rate_limits": { 00:05:38.101 "rw_ios_per_sec": 0, 00:05:38.101 "rw_mbytes_per_sec": 0, 00:05:38.101 "r_mbytes_per_sec": 0, 00:05:38.101 "w_mbytes_per_sec": 0 00:05:38.101 }, 00:05:38.101 "claimed": false, 00:05:38.101 "zoned": false, 00:05:38.101 "supported_io_types": { 00:05:38.101 "read": true, 00:05:38.101 "write": true, 00:05:38.101 "unmap": true, 00:05:38.101 "write_zeroes": true, 00:05:38.101 "flush": true, 00:05:38.101 "reset": true, 00:05:38.101 "compare": false, 00:05:38.101 "compare_and_write": false, 00:05:38.101 "abort": true, 00:05:38.101 "nvme_admin": false, 00:05:38.101 "nvme_io": false 00:05:38.101 }, 00:05:38.101 "memory_domains": [ 00:05:38.101 { 00:05:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.101 "dma_device_type": 2 00:05:38.101 } 00:05:38.101 ], 00:05:38.101 "driver_specific": {} 00:05:38.101 } 00:05:38.101 ]' 00:05:38.101 14:52:01 -- rpc/rpc.sh@17 -- # jq length 00:05:38.101 14:52:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.101 14:52:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 [2024-11-18 14:52:01.553631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.101 [2024-11-18 14:52:01.553685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.101 [2024-11-18 14:52:01.553702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:38.101 [2024-11-18 14:52:01.553712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.101 [2024-11-18 14:52:01.555833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.101 [2024-11-18 14:52:01.555870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.101 Passthru0 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.101 { 00:05:38.101 "name": "Malloc2", 00:05:38.101 "aliases": [ 00:05:38.101 "8b9efdc3-66ae-431c-ba8f-8969ba456c48" 00:05:38.101 ], 00:05:38.101 "product_name": "Malloc disk", 00:05:38.101 "block_size": 512, 00:05:38.101 "num_blocks": 16384, 00:05:38.101 "uuid": "8b9efdc3-66ae-431c-ba8f-8969ba456c48", 00:05:38.101 "assigned_rate_limits": { 00:05:38.101 "rw_ios_per_sec": 0, 00:05:38.101 "rw_mbytes_per_sec": 0, 00:05:38.101 "r_mbytes_per_sec": 0, 00:05:38.101 
"w_mbytes_per_sec": 0 00:05:38.101 }, 00:05:38.101 "claimed": true, 00:05:38.101 "claim_type": "exclusive_write", 00:05:38.101 "zoned": false, 00:05:38.101 "supported_io_types": { 00:05:38.101 "read": true, 00:05:38.101 "write": true, 00:05:38.101 "unmap": true, 00:05:38.101 "write_zeroes": true, 00:05:38.101 "flush": true, 00:05:38.101 "reset": true, 00:05:38.101 "compare": false, 00:05:38.101 "compare_and_write": false, 00:05:38.101 "abort": true, 00:05:38.101 "nvme_admin": false, 00:05:38.101 "nvme_io": false 00:05:38.101 }, 00:05:38.101 "memory_domains": [ 00:05:38.101 { 00:05:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.101 "dma_device_type": 2 00:05:38.101 } 00:05:38.101 ], 00:05:38.101 "driver_specific": {} 00:05:38.101 }, 00:05:38.101 { 00:05:38.101 "name": "Passthru0", 00:05:38.101 "aliases": [ 00:05:38.101 "6ba4dd4f-73d4-5778-84a1-35e18ce06e9b" 00:05:38.101 ], 00:05:38.101 "product_name": "passthru", 00:05:38.101 "block_size": 512, 00:05:38.101 "num_blocks": 16384, 00:05:38.101 "uuid": "6ba4dd4f-73d4-5778-84a1-35e18ce06e9b", 00:05:38.101 "assigned_rate_limits": { 00:05:38.101 "rw_ios_per_sec": 0, 00:05:38.101 "rw_mbytes_per_sec": 0, 00:05:38.101 "r_mbytes_per_sec": 0, 00:05:38.101 "w_mbytes_per_sec": 0 00:05:38.101 }, 00:05:38.101 "claimed": false, 00:05:38.101 "zoned": false, 00:05:38.101 "supported_io_types": { 00:05:38.101 "read": true, 00:05:38.101 "write": true, 00:05:38.101 "unmap": true, 00:05:38.101 "write_zeroes": true, 00:05:38.101 "flush": true, 00:05:38.101 "reset": true, 00:05:38.101 "compare": false, 00:05:38.101 "compare_and_write": false, 00:05:38.101 "abort": true, 00:05:38.101 "nvme_admin": false, 00:05:38.101 "nvme_io": false 00:05:38.101 }, 00:05:38.101 "memory_domains": [ 00:05:38.101 { 00:05:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.101 "dma_device_type": 2 00:05:38.101 } 00:05:38.101 ], 00:05:38.101 "driver_specific": { 00:05:38.101 "passthru": { 00:05:38.101 "name": "Passthru0", 00:05:38.101 "base_bdev_name": "Malloc2" 00:05:38.101 } 00:05:38.101 } 00:05:38.101 } 00:05:38.101 ]' 00:05:38.101 14:52:01 -- rpc/rpc.sh@21 -- # jq length 00:05:38.101 14:52:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.101 14:52:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.101 14:52:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.101 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.101 14:52:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.101 14:52:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.101 14:52:01 -- rpc/rpc.sh@26 -- # jq length 00:05:38.101 ************************************ 00:05:38.101 END TEST rpc_daemon_integrity 00:05:38.102 ************************************ 00:05:38.102 14:52:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.102 00:05:38.102 real 0m0.214s 00:05:38.102 user 0m0.125s 00:05:38.102 sys 0m0.027s 00:05:38.102 14:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.102 
14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.363 14:52:01 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.363 14:52:01 -- rpc/rpc.sh@84 -- # killprocess 68541 00:05:38.363 14:52:01 -- common/autotest_common.sh@936 -- # '[' -z 68541 ']' 00:05:38.363 14:52:01 -- common/autotest_common.sh@940 -- # kill -0 68541 00:05:38.363 14:52:01 -- common/autotest_common.sh@941 -- # uname 00:05:38.363 14:52:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.363 14:52:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68541 00:05:38.363 killing process with pid 68541 00:05:38.363 14:52:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.363 14:52:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.363 14:52:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68541' 00:05:38.363 14:52:01 -- common/autotest_common.sh@955 -- # kill 68541 00:05:38.363 14:52:01 -- common/autotest_common.sh@960 -- # wait 68541 00:05:38.624 00:05:38.624 real 0m2.313s 00:05:38.624 user 0m2.660s 00:05:38.624 sys 0m0.676s 00:05:38.624 14:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.624 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.624 ************************************ 00:05:38.624 END TEST rpc 00:05:38.624 ************************************ 00:05:38.624 14:52:02 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.624 14:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.624 14:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.624 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.624 ************************************ 00:05:38.624 START TEST rpc_client 00:05:38.624 ************************************ 00:05:38.624 14:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.624 * Looking for test storage... 00:05:38.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:38.624 14:52:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.624 14:52:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.624 14:52:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.624 14:52:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.624 14:52:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.624 14:52:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.624 14:52:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.624 14:52:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.624 14:52:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.625 14:52:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.625 14:52:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.625 14:52:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.625 14:52:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.625 14:52:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.625 14:52:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.625 14:52:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.625 14:52:02 -- scripts/common.sh@344 -- # : 1 00:05:38.625 14:52:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.625 14:52:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.625 14:52:02 -- scripts/common.sh@364 -- # decimal 1 00:05:38.625 14:52:02 -- scripts/common.sh@352 -- # local d=1 00:05:38.625 14:52:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.625 14:52:02 -- scripts/common.sh@354 -- # echo 1 00:05:38.625 14:52:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.625 14:52:02 -- scripts/common.sh@365 -- # decimal 2 00:05:38.625 14:52:02 -- scripts/common.sh@352 -- # local d=2 00:05:38.625 14:52:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.625 14:52:02 -- scripts/common.sh@354 -- # echo 2 00:05:38.625 14:52:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.625 14:52:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.625 14:52:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.625 14:52:02 -- scripts/common.sh@367 -- # return 0 00:05:38.625 14:52:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.625 14:52:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.625 --rc genhtml_branch_coverage=1 00:05:38.625 --rc genhtml_function_coverage=1 00:05:38.625 --rc genhtml_legend=1 00:05:38.625 --rc geninfo_all_blocks=1 00:05:38.625 --rc geninfo_unexecuted_blocks=1 00:05:38.625 00:05:38.625 ' 00:05:38.625 14:52:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.625 --rc genhtml_branch_coverage=1 00:05:38.625 --rc genhtml_function_coverage=1 00:05:38.625 --rc genhtml_legend=1 00:05:38.625 --rc geninfo_all_blocks=1 00:05:38.625 --rc geninfo_unexecuted_blocks=1 00:05:38.625 00:05:38.625 ' 00:05:38.625 14:52:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.625 --rc genhtml_branch_coverage=1 00:05:38.625 --rc genhtml_function_coverage=1 00:05:38.625 --rc genhtml_legend=1 00:05:38.625 --rc geninfo_all_blocks=1 00:05:38.625 --rc geninfo_unexecuted_blocks=1 00:05:38.625 00:05:38.625 ' 00:05:38.625 14:52:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.625 --rc genhtml_branch_coverage=1 00:05:38.625 --rc genhtml_function_coverage=1 00:05:38.625 --rc genhtml_legend=1 00:05:38.625 --rc geninfo_all_blocks=1 00:05:38.625 --rc geninfo_unexecuted_blocks=1 00:05:38.625 00:05:38.625 ' 00:05:38.625 14:52:02 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:38.625 OK 00:05:38.885 14:52:02 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:38.885 00:05:38.885 real 0m0.182s 00:05:38.885 user 0m0.100s 00:05:38.885 sys 0m0.091s 00:05:38.885 14:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.885 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 END TEST rpc_client 00:05:38.885 ************************************ 00:05:38.885 14:52:02 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.885 14:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.885 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.885 ************************************ 00:05:38.885 START TEST 
json_config 00:05:38.885 ************************************ 00:05:38.885 14:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.885 14:52:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.885 14:52:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.885 14:52:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.885 14:52:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.885 14:52:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.885 14:52:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.885 14:52:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.885 14:52:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.885 14:52:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.885 14:52:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.885 14:52:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.885 14:52:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.885 14:52:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.885 14:52:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.885 14:52:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.885 14:52:02 -- scripts/common.sh@344 -- # : 1 00:05:38.885 14:52:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.885 14:52:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.885 14:52:02 -- scripts/common.sh@364 -- # decimal 1 00:05:38.885 14:52:02 -- scripts/common.sh@352 -- # local d=1 00:05:38.885 14:52:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.885 14:52:02 -- scripts/common.sh@354 -- # echo 1 00:05:38.885 14:52:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.885 14:52:02 -- scripts/common.sh@365 -- # decimal 2 00:05:38.885 14:52:02 -- scripts/common.sh@352 -- # local d=2 00:05:38.885 14:52:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.885 14:52:02 -- scripts/common.sh@354 -- # echo 2 00:05:38.885 14:52:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.885 14:52:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.885 14:52:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.885 14:52:02 -- scripts/common.sh@367 -- # return 0 00:05:38.885 14:52:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.885 --rc genhtml_branch_coverage=1 00:05:38.885 --rc genhtml_function_coverage=1 00:05:38.885 --rc genhtml_legend=1 00:05:38.885 --rc geninfo_all_blocks=1 00:05:38.885 --rc geninfo_unexecuted_blocks=1 00:05:38.885 00:05:38.885 ' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.885 --rc genhtml_branch_coverage=1 00:05:38.885 --rc genhtml_function_coverage=1 00:05:38.885 --rc genhtml_legend=1 00:05:38.885 --rc geninfo_all_blocks=1 00:05:38.885 --rc geninfo_unexecuted_blocks=1 00:05:38.885 00:05:38.885 ' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.885 --rc genhtml_branch_coverage=1 00:05:38.885 --rc genhtml_function_coverage=1 00:05:38.885 --rc genhtml_legend=1 00:05:38.885 --rc 
geninfo_all_blocks=1 00:05:38.885 --rc geninfo_unexecuted_blocks=1 00:05:38.885 00:05:38.885 ' 00:05:38.885 14:52:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.885 --rc genhtml_branch_coverage=1 00:05:38.885 --rc genhtml_function_coverage=1 00:05:38.885 --rc genhtml_legend=1 00:05:38.885 --rc geninfo_all_blocks=1 00:05:38.885 --rc geninfo_unexecuted_blocks=1 00:05:38.885 00:05:38.885 ' 00:05:38.885 14:52:02 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.885 14:52:02 -- nvmf/common.sh@7 -- # uname -s 00:05:38.885 14:52:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.885 14:52:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.885 14:52:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.885 14:52:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.885 14:52:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.885 14:52:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.885 14:52:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.885 14:52:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.885 14:52:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.885 14:52:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.885 14:52:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b078765-971d-4fa4-a361-94e987528f08 00:05:38.885 14:52:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b078765-971d-4fa4-a361-94e987528f08 00:05:38.885 14:52:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.886 14:52:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.886 14:52:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.886 14:52:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.886 14:52:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.886 14:52:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.886 14:52:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.886 14:52:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.886 14:52:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.886 14:52:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.886 
14:52:02 -- paths/export.sh@5 -- # export PATH 00:05:38.886 14:52:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.886 14:52:02 -- nvmf/common.sh@46 -- # : 0 00:05:38.886 14:52:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:38.886 14:52:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:38.886 14:52:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:38.886 14:52:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.886 14:52:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.886 14:52:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:38.886 14:52:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:38.886 14:52:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:38.886 14:52:02 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:38.886 14:52:02 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:38.886 14:52:02 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:38.886 14:52:02 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:38.886 WARNING: No tests are enabled so not running JSON configuration tests 00:05:38.886 14:52:02 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:38.886 14:52:02 -- json_config/json_config.sh@27 -- # exit 0 00:05:38.886 00:05:38.886 real 0m0.145s 00:05:38.886 user 0m0.087s 00:05:38.886 sys 0m0.059s 00:05:38.886 14:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.886 ************************************ 00:05:38.886 END TEST json_config 00:05:38.886 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.886 ************************************ 00:05:38.886 14:52:02 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:38.886 14:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.886 14:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.886 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.886 ************************************ 00:05:38.886 START TEST json_config_extra_key 00:05:38.886 ************************************ 00:05:38.886 14:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.147 14:52:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.147 14:52:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.147 14:52:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.147 14:52:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.147 14:52:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.147 14:52:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.147 14:52:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.147 14:52:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.147 14:52:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.147 14:52:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.147 14:52:02 
-- scripts/common.sh@336 -- # read -ra ver2 00:05:39.147 14:52:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.147 14:52:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.147 14:52:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.147 14:52:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.147 14:52:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.147 14:52:02 -- scripts/common.sh@344 -- # : 1 00:05:39.147 14:52:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.147 14:52:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.147 14:52:02 -- scripts/common.sh@364 -- # decimal 1 00:05:39.147 14:52:02 -- scripts/common.sh@352 -- # local d=1 00:05:39.147 14:52:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.147 14:52:02 -- scripts/common.sh@354 -- # echo 1 00:05:39.147 14:52:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.147 14:52:02 -- scripts/common.sh@365 -- # decimal 2 00:05:39.147 14:52:02 -- scripts/common.sh@352 -- # local d=2 00:05:39.147 14:52:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.147 14:52:02 -- scripts/common.sh@354 -- # echo 2 00:05:39.147 14:52:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.147 14:52:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.147 14:52:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.147 14:52:02 -- scripts/common.sh@367 -- # return 0 00:05:39.147 14:52:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.147 14:52:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.147 --rc genhtml_branch_coverage=1 00:05:39.147 --rc genhtml_function_coverage=1 00:05:39.147 --rc genhtml_legend=1 00:05:39.147 --rc geninfo_all_blocks=1 00:05:39.147 --rc geninfo_unexecuted_blocks=1 00:05:39.147 00:05:39.147 ' 00:05:39.147 14:52:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.147 --rc genhtml_branch_coverage=1 00:05:39.147 --rc genhtml_function_coverage=1 00:05:39.147 --rc genhtml_legend=1 00:05:39.147 --rc geninfo_all_blocks=1 00:05:39.147 --rc geninfo_unexecuted_blocks=1 00:05:39.147 00:05:39.147 ' 00:05:39.147 14:52:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.147 --rc genhtml_branch_coverage=1 00:05:39.147 --rc genhtml_function_coverage=1 00:05:39.147 --rc genhtml_legend=1 00:05:39.147 --rc geninfo_all_blocks=1 00:05:39.147 --rc geninfo_unexecuted_blocks=1 00:05:39.147 00:05:39.147 ' 00:05:39.147 14:52:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.147 --rc genhtml_branch_coverage=1 00:05:39.147 --rc genhtml_function_coverage=1 00:05:39.147 --rc genhtml_legend=1 00:05:39.147 --rc geninfo_all_blocks=1 00:05:39.147 --rc geninfo_unexecuted_blocks=1 00:05:39.147 00:05:39.147 ' 00:05:39.147 14:52:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.147 14:52:02 -- nvmf/common.sh@7 -- # uname -s 00:05:39.147 14:52:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.147 14:52:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.147 14:52:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.147 14:52:02 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:05:39.147 14:52:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.148 14:52:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.148 14:52:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.148 14:52:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.148 14:52:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.148 14:52:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.148 14:52:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b078765-971d-4fa4-a361-94e987528f08 00:05:39.148 14:52:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b078765-971d-4fa4-a361-94e987528f08 00:05:39.148 14:52:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.148 14:52:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.148 14:52:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.148 14:52:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.148 14:52:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.148 14:52:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.148 14:52:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.148 14:52:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.148 14:52:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.148 14:52:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.148 14:52:02 -- paths/export.sh@5 -- # export PATH 00:05:39.148 14:52:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.148 14:52:02 -- nvmf/common.sh@46 -- # : 0 00:05:39.148 14:52:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:39.148 14:52:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:39.148 14:52:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:39.148 14:52:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.148 14:52:02 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.148 14:52:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:39.148 14:52:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:39.148 14:52:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.148 INFO: launching applications... 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68834 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:39.148 Waiting for target to run... 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68834 /var/tmp/spdk_tgt.sock 00:05:39.148 14:52:02 -- common/autotest_common.sh@829 -- # '[' -z 68834 ']' 00:05:39.148 14:52:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.148 14:52:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.148 14:52:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.148 14:52:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.148 14:52:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.148 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.148 [2024-11-18 14:52:02.674475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:39.148 [2024-11-18 14:52:02.674780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68834 ] 00:05:39.410 [2024-11-18 14:52:02.979077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.410 [2024-11-18 14:52:02.995637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.410 [2024-11-18 14:52:02.995941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.981 00:05:39.981 INFO: shutting down applications... 00:05:39.981 14:52:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.981 14:52:03 -- common/autotest_common.sh@862 -- # return 0 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68834 ]] 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68834 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68834 00:05:39.981 14:52:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68834 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:40.551 SPDK target shutdown done 00:05:40.551 14:52:03 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:40.551 Success 00:05:40.551 00:05:40.551 real 0m1.540s 00:05:40.551 user 0m1.226s 00:05:40.551 sys 0m0.336s 00:05:40.551 14:52:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.551 ************************************ 00:05:40.551 END TEST json_config_extra_key 00:05:40.551 14:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:40.551 ************************************ 00:05:40.551 14:52:04 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.551 14:52:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.551 14:52:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.551 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.551 ************************************ 00:05:40.551 START TEST alias_rpc 00:05:40.551 ************************************ 00:05:40.551 14:52:04 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.551 * Looking for test storage... 00:05:40.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:40.551 14:52:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.551 14:52:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.551 14:52:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.813 14:52:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.813 14:52:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.813 14:52:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.813 14:52:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.813 14:52:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.813 14:52:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.813 14:52:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.813 14:52:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.813 14:52:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.813 14:52:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.813 14:52:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.813 14:52:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.813 14:52:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.813 14:52:04 -- scripts/common.sh@344 -- # : 1 00:05:40.813 14:52:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.813 14:52:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.813 14:52:04 -- scripts/common.sh@364 -- # decimal 1 00:05:40.813 14:52:04 -- scripts/common.sh@352 -- # local d=1 00:05:40.813 14:52:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.813 14:52:04 -- scripts/common.sh@354 -- # echo 1 00:05:40.813 14:52:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.813 14:52:04 -- scripts/common.sh@365 -- # decimal 2 00:05:40.813 14:52:04 -- scripts/common.sh@352 -- # local d=2 00:05:40.813 14:52:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.813 14:52:04 -- scripts/common.sh@354 -- # echo 2 00:05:40.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
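The shutdown traced just above ("kill -SIGINT 68834", then repeated "kill -0" probes until "SPDK target shutdown done") is a signal-and-poll pattern. A reconstruction from the xtrace, with illustrative variable names:

  # ask the target to exit cleanly, then poll for up to 30 half-second intervals
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
    # kill -0 delivers no signal; it only tests whether the process still exists
    if ! kill -0 "$app_pid" 2> /dev/null; then
      echo 'SPDK target shutdown done'
      break
    fi
    sleep 0.5
  done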
00:05:40.813 14:52:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.813 14:52:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.813 14:52:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.813 14:52:04 -- scripts/common.sh@367 -- # return 0 00:05:40.813 14:52:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.813 14:52:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.813 --rc genhtml_branch_coverage=1 00:05:40.813 --rc genhtml_function_coverage=1 00:05:40.813 --rc genhtml_legend=1 00:05:40.813 --rc geninfo_all_blocks=1 00:05:40.813 --rc geninfo_unexecuted_blocks=1 00:05:40.813 00:05:40.813 ' 00:05:40.813 14:52:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.813 --rc genhtml_branch_coverage=1 00:05:40.813 --rc genhtml_function_coverage=1 00:05:40.813 --rc genhtml_legend=1 00:05:40.813 --rc geninfo_all_blocks=1 00:05:40.813 --rc geninfo_unexecuted_blocks=1 00:05:40.813 00:05:40.813 ' 00:05:40.813 14:52:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.813 --rc genhtml_branch_coverage=1 00:05:40.813 --rc genhtml_function_coverage=1 00:05:40.813 --rc genhtml_legend=1 00:05:40.813 --rc geninfo_all_blocks=1 00:05:40.813 --rc geninfo_unexecuted_blocks=1 00:05:40.813 00:05:40.813 ' 00:05:40.813 14:52:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.813 --rc genhtml_branch_coverage=1 00:05:40.813 --rc genhtml_function_coverage=1 00:05:40.813 --rc genhtml_legend=1 00:05:40.813 --rc geninfo_all_blocks=1 00:05:40.813 --rc geninfo_unexecuted_blocks=1 00:05:40.813 00:05:40.813 ' 00:05:40.813 14:52:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.813 14:52:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68906 00:05:40.813 14:52:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68906 00:05:40.813 14:52:04 -- common/autotest_common.sh@829 -- # '[' -z 68906 ']' 00:05:40.813 14:52:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.813 14:52:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.813 14:52:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.813 14:52:04 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.813 14:52:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.813 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.813 [2024-11-18 14:52:04.275354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
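The block traced before every test here is the lcov version gate from scripts/common.sh: "lt 1.15 2" asks whether the installed lcov (1.15) predates 2.x, and if so the branch/function coverage flags seen in the exported LCOV_OPTS are enabled. A simplified sketch of that dotted-version comparison:

  # return 0 (true) when version $1 sorts strictly below version $2
  lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      # missing components compare as 0, so 1.15 vs 2 becomes 1.15.0 vs 2.0.0
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'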
00:05:40.813 [2024-11-18 14:52:04.275490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:05:41.075 [2024-11-18 14:52:04.430852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.075 [2024-11-18 14:52:04.480291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.075 [2024-11-18 14:52:04.480549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.647 14:52:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.647 14:52:05 -- common/autotest_common.sh@862 -- # return 0 00:05:41.647 14:52:05 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.906 14:52:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68906 00:05:41.906 14:52:05 -- common/autotest_common.sh@936 -- # '[' -z 68906 ']' 00:05:41.906 14:52:05 -- common/autotest_common.sh@940 -- # kill -0 68906 00:05:41.906 14:52:05 -- common/autotest_common.sh@941 -- # uname 00:05:41.906 14:52:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.906 14:52:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68906 00:05:41.906 killing process with pid 68906 00:05:41.906 14:52:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.906 14:52:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.906 14:52:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68906' 00:05:41.906 14:52:05 -- common/autotest_common.sh@955 -- # kill 68906 00:05:41.906 14:52:05 -- common/autotest_common.sh@960 -- # wait 68906 00:05:42.167 ************************************ 00:05:42.167 END TEST alias_rpc 00:05:42.167 ************************************ 00:05:42.167 00:05:42.167 real 0m1.616s 00:05:42.167 user 0m1.693s 00:05:42.167 sys 0m0.445s 00:05:42.167 14:52:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.167 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.167 14:52:05 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:42.167 14:52:05 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.167 14:52:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.167 14:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.167 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.167 ************************************ 00:05:42.167 START TEST spdkcli_tcp 00:05:42.167 ************************************ 00:05:42.167 14:52:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.426 * Looking for test storage... 
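alias_rpc's single RPC step above, rpc.py load_config -i, replays a JSON configuration against the live target, with -i/--include-aliases accepting deprecated method names. A usage sketch; the config round-tripped here is illustrative, since the test's actual input is not shown in the log:

  scripts/rpc.py save_config > /tmp/cfg.json      # capture the target's current configuration
  scripts/rpc.py load_config -i < /tmp/cfg.json   # replay it, deprecated aliases permitted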
00:05:42.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:42.426 14:52:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.426 14:52:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.426 14:52:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.426 14:52:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.426 14:52:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.426 14:52:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.426 14:52:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.426 14:52:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.426 14:52:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.426 14:52:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.426 14:52:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.426 14:52:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.426 14:52:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.426 14:52:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.426 14:52:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.426 14:52:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.426 14:52:05 -- scripts/common.sh@344 -- # : 1 00:05:42.426 14:52:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.426 14:52:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.426 14:52:05 -- scripts/common.sh@364 -- # decimal 1 00:05:42.426 14:52:05 -- scripts/common.sh@352 -- # local d=1 00:05:42.426 14:52:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.426 14:52:05 -- scripts/common.sh@354 -- # echo 1 00:05:42.426 14:52:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.426 14:52:05 -- scripts/common.sh@365 -- # decimal 2 00:05:42.426 14:52:05 -- scripts/common.sh@352 -- # local d=2 00:05:42.426 14:52:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.426 14:52:05 -- scripts/common.sh@354 -- # echo 2 00:05:42.426 14:52:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.426 14:52:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.426 14:52:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.426 14:52:05 -- scripts/common.sh@367 -- # return 0 00:05:42.426 14:52:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.426 14:52:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.426 --rc genhtml_branch_coverage=1 00:05:42.426 --rc genhtml_function_coverage=1 00:05:42.426 --rc genhtml_legend=1 00:05:42.426 --rc geninfo_all_blocks=1 00:05:42.426 --rc geninfo_unexecuted_blocks=1 00:05:42.426 00:05:42.426 ' 00:05:42.426 14:52:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.426 --rc genhtml_branch_coverage=1 00:05:42.426 --rc genhtml_function_coverage=1 00:05:42.426 --rc genhtml_legend=1 00:05:42.426 --rc geninfo_all_blocks=1 00:05:42.426 --rc geninfo_unexecuted_blocks=1 00:05:42.426 00:05:42.426 ' 00:05:42.426 14:52:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.426 --rc genhtml_branch_coverage=1 00:05:42.426 --rc genhtml_function_coverage=1 00:05:42.426 --rc genhtml_legend=1 00:05:42.426 --rc geninfo_all_blocks=1 00:05:42.426 --rc geninfo_unexecuted_blocks=1 00:05:42.426 00:05:42.426 ' 00:05:42.426 14:52:05 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.426 --rc genhtml_branch_coverage=1 00:05:42.426 --rc genhtml_function_coverage=1 00:05:42.426 --rc genhtml_legend=1 00:05:42.426 --rc geninfo_all_blocks=1 00:05:42.426 --rc geninfo_unexecuted_blocks=1 00:05:42.426 00:05:42.426 ' 00:05:42.426 14:52:05 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:42.426 14:52:05 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:42.426 14:52:05 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:42.426 14:52:05 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.426 14:52:05 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.426 14:52:05 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.427 14:52:05 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.427 14:52:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.427 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.427 14:52:05 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=68985 00:05:42.427 14:52:05 -- spdkcli/tcp.sh@27 -- # waitforlisten 68985 00:05:42.427 14:52:05 -- common/autotest_common.sh@829 -- # '[' -z 68985 ']' 00:05:42.427 14:52:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.427 14:52:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.427 14:52:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.427 14:52:05 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.427 14:52:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.427 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.427 [2024-11-18 14:52:05.955224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:42.427 [2024-11-18 14:52:05.955363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68985 ] 00:05:42.685 [2024-11-18 14:52:06.101988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.685 [2024-11-18 14:52:06.133847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.685 [2024-11-18 14:52:06.134259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.685 [2024-11-18 14:52:06.134364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.250 14:52:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.250 14:52:06 -- common/autotest_common.sh@862 -- # return 0 00:05:43.250 14:52:06 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.250 14:52:06 -- spdkcli/tcp.sh@31 -- # socat_pid=69002 00:05:43.250 14:52:06 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.510 [ 00:05:43.510 "bdev_malloc_delete", 00:05:43.510 "bdev_malloc_create", 00:05:43.510 "bdev_null_resize", 00:05:43.510 "bdev_null_delete", 00:05:43.510 "bdev_null_create", 00:05:43.510 "bdev_nvme_cuse_unregister", 00:05:43.510 "bdev_nvme_cuse_register", 00:05:43.510 "bdev_opal_new_user", 00:05:43.510 "bdev_opal_set_lock_state", 00:05:43.510 "bdev_opal_delete", 00:05:43.510 "bdev_opal_get_info", 00:05:43.510 "bdev_opal_create", 00:05:43.510 "bdev_nvme_opal_revert", 00:05:43.510 "bdev_nvme_opal_init", 00:05:43.510 "bdev_nvme_send_cmd", 00:05:43.510 "bdev_nvme_get_path_iostat", 00:05:43.510 "bdev_nvme_get_mdns_discovery_info", 00:05:43.510 "bdev_nvme_stop_mdns_discovery", 00:05:43.510 "bdev_nvme_start_mdns_discovery", 00:05:43.510 "bdev_nvme_set_multipath_policy", 00:05:43.510 "bdev_nvme_set_preferred_path", 00:05:43.510 "bdev_nvme_get_io_paths", 00:05:43.510 "bdev_nvme_remove_error_injection", 00:05:43.510 "bdev_nvme_add_error_injection", 00:05:43.510 "bdev_nvme_get_discovery_info", 00:05:43.510 "bdev_nvme_stop_discovery", 00:05:43.510 "bdev_nvme_start_discovery", 00:05:43.510 "bdev_nvme_get_controller_health_info", 00:05:43.510 "bdev_nvme_disable_controller", 00:05:43.510 "bdev_nvme_enable_controller", 00:05:43.511 "bdev_nvme_reset_controller", 00:05:43.511 "bdev_nvme_get_transport_statistics", 00:05:43.511 "bdev_nvme_apply_firmware", 00:05:43.511 "bdev_nvme_detach_controller", 00:05:43.511 "bdev_nvme_get_controllers", 00:05:43.511 "bdev_nvme_attach_controller", 00:05:43.511 "bdev_nvme_set_hotplug", 00:05:43.511 "bdev_nvme_set_options", 00:05:43.511 "bdev_passthru_delete", 00:05:43.511 "bdev_passthru_create", 00:05:43.511 "bdev_lvol_grow_lvstore", 00:05:43.511 "bdev_lvol_get_lvols", 00:05:43.511 "bdev_lvol_get_lvstores", 00:05:43.511 "bdev_lvol_delete", 00:05:43.511 "bdev_lvol_set_read_only", 00:05:43.511 "bdev_lvol_resize", 00:05:43.511 "bdev_lvol_decouple_parent", 00:05:43.511 "bdev_lvol_inflate", 00:05:43.511 "bdev_lvol_rename", 00:05:43.511 "bdev_lvol_clone_bdev", 00:05:43.511 "bdev_lvol_clone", 00:05:43.511 "bdev_lvol_snapshot", 00:05:43.511 "bdev_lvol_create", 00:05:43.511 "bdev_lvol_delete_lvstore", 00:05:43.511 "bdev_lvol_rename_lvstore", 00:05:43.511 "bdev_lvol_create_lvstore", 00:05:43.511 "bdev_raid_set_options", 00:05:43.511 "bdev_raid_remove_base_bdev", 00:05:43.511 "bdev_raid_add_base_bdev", 
00:05:43.511 "bdev_raid_delete", 00:05:43.511 "bdev_raid_create", 00:05:43.511 "bdev_raid_get_bdevs", 00:05:43.511 "bdev_error_inject_error", 00:05:43.511 "bdev_error_delete", 00:05:43.511 "bdev_error_create", 00:05:43.511 "bdev_split_delete", 00:05:43.511 "bdev_split_create", 00:05:43.511 "bdev_delay_delete", 00:05:43.511 "bdev_delay_create", 00:05:43.511 "bdev_delay_update_latency", 00:05:43.511 "bdev_zone_block_delete", 00:05:43.511 "bdev_zone_block_create", 00:05:43.511 "blobfs_create", 00:05:43.511 "blobfs_detect", 00:05:43.511 "blobfs_set_cache_size", 00:05:43.511 "bdev_xnvme_delete", 00:05:43.511 "bdev_xnvme_create", 00:05:43.511 "bdev_aio_delete", 00:05:43.511 "bdev_aio_rescan", 00:05:43.511 "bdev_aio_create", 00:05:43.511 "bdev_ftl_set_property", 00:05:43.511 "bdev_ftl_get_properties", 00:05:43.511 "bdev_ftl_get_stats", 00:05:43.511 "bdev_ftl_unmap", 00:05:43.511 "bdev_ftl_unload", 00:05:43.511 "bdev_ftl_delete", 00:05:43.511 "bdev_ftl_load", 00:05:43.511 "bdev_ftl_create", 00:05:43.511 "bdev_virtio_attach_controller", 00:05:43.511 "bdev_virtio_scsi_get_devices", 00:05:43.511 "bdev_virtio_detach_controller", 00:05:43.511 "bdev_virtio_blk_set_hotplug", 00:05:43.511 "bdev_iscsi_delete", 00:05:43.511 "bdev_iscsi_create", 00:05:43.511 "bdev_iscsi_set_options", 00:05:43.511 "accel_error_inject_error", 00:05:43.511 "ioat_scan_accel_module", 00:05:43.511 "dsa_scan_accel_module", 00:05:43.511 "iaa_scan_accel_module", 00:05:43.511 "iscsi_set_options", 00:05:43.511 "iscsi_get_auth_groups", 00:05:43.511 "iscsi_auth_group_remove_secret", 00:05:43.511 "iscsi_auth_group_add_secret", 00:05:43.511 "iscsi_delete_auth_group", 00:05:43.511 "iscsi_create_auth_group", 00:05:43.511 "iscsi_set_discovery_auth", 00:05:43.511 "iscsi_get_options", 00:05:43.511 "iscsi_target_node_request_logout", 00:05:43.511 "iscsi_target_node_set_redirect", 00:05:43.511 "iscsi_target_node_set_auth", 00:05:43.511 "iscsi_target_node_add_lun", 00:05:43.511 "iscsi_get_connections", 00:05:43.511 "iscsi_portal_group_set_auth", 00:05:43.511 "iscsi_start_portal_group", 00:05:43.511 "iscsi_delete_portal_group", 00:05:43.511 "iscsi_create_portal_group", 00:05:43.511 "iscsi_get_portal_groups", 00:05:43.511 "iscsi_delete_target_node", 00:05:43.511 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.511 "iscsi_target_node_add_pg_ig_maps", 00:05:43.511 "iscsi_create_target_node", 00:05:43.511 "iscsi_get_target_nodes", 00:05:43.511 "iscsi_delete_initiator_group", 00:05:43.511 "iscsi_initiator_group_remove_initiators", 00:05:43.511 "iscsi_initiator_group_add_initiators", 00:05:43.511 "iscsi_create_initiator_group", 00:05:43.511 "iscsi_get_initiator_groups", 00:05:43.511 "nvmf_set_crdt", 00:05:43.511 "nvmf_set_config", 00:05:43.511 "nvmf_set_max_subsystems", 00:05:43.511 "nvmf_subsystem_get_listeners", 00:05:43.511 "nvmf_subsystem_get_qpairs", 00:05:43.511 "nvmf_subsystem_get_controllers", 00:05:43.511 "nvmf_get_stats", 00:05:43.511 "nvmf_get_transports", 00:05:43.511 "nvmf_create_transport", 00:05:43.511 "nvmf_get_targets", 00:05:43.511 "nvmf_delete_target", 00:05:43.511 "nvmf_create_target", 00:05:43.511 "nvmf_subsystem_allow_any_host", 00:05:43.511 "nvmf_subsystem_remove_host", 00:05:43.511 "nvmf_subsystem_add_host", 00:05:43.511 "nvmf_subsystem_remove_ns", 00:05:43.511 "nvmf_subsystem_add_ns", 00:05:43.511 "nvmf_subsystem_listener_set_ana_state", 00:05:43.511 "nvmf_discovery_get_referrals", 00:05:43.511 "nvmf_discovery_remove_referral", 00:05:43.511 "nvmf_discovery_add_referral", 00:05:43.511 "nvmf_subsystem_remove_listener", 00:05:43.511 
"nvmf_subsystem_add_listener", 00:05:43.511 "nvmf_delete_subsystem", 00:05:43.511 "nvmf_create_subsystem", 00:05:43.511 "nvmf_get_subsystems", 00:05:43.511 "env_dpdk_get_mem_stats", 00:05:43.511 "nbd_get_disks", 00:05:43.511 "nbd_stop_disk", 00:05:43.511 "nbd_start_disk", 00:05:43.511 "ublk_recover_disk", 00:05:43.511 "ublk_get_disks", 00:05:43.511 "ublk_stop_disk", 00:05:43.511 "ublk_start_disk", 00:05:43.511 "ublk_destroy_target", 00:05:43.511 "ublk_create_target", 00:05:43.511 "virtio_blk_create_transport", 00:05:43.511 "virtio_blk_get_transports", 00:05:43.511 "vhost_controller_set_coalescing", 00:05:43.511 "vhost_get_controllers", 00:05:43.511 "vhost_delete_controller", 00:05:43.511 "vhost_create_blk_controller", 00:05:43.511 "vhost_scsi_controller_remove_target", 00:05:43.511 "vhost_scsi_controller_add_target", 00:05:43.511 "vhost_start_scsi_controller", 00:05:43.511 "vhost_create_scsi_controller", 00:05:43.511 "thread_set_cpumask", 00:05:43.512 "framework_get_scheduler", 00:05:43.512 "framework_set_scheduler", 00:05:43.512 "framework_get_reactors", 00:05:43.512 "thread_get_io_channels", 00:05:43.512 "thread_get_pollers", 00:05:43.512 "thread_get_stats", 00:05:43.512 "framework_monitor_context_switch", 00:05:43.512 "spdk_kill_instance", 00:05:43.512 "log_enable_timestamps", 00:05:43.512 "log_get_flags", 00:05:43.512 "log_clear_flag", 00:05:43.512 "log_set_flag", 00:05:43.512 "log_get_level", 00:05:43.512 "log_set_level", 00:05:43.512 "log_get_print_level", 00:05:43.512 "log_set_print_level", 00:05:43.512 "framework_enable_cpumask_locks", 00:05:43.512 "framework_disable_cpumask_locks", 00:05:43.512 "framework_wait_init", 00:05:43.512 "framework_start_init", 00:05:43.512 "scsi_get_devices", 00:05:43.512 "bdev_get_histogram", 00:05:43.512 "bdev_enable_histogram", 00:05:43.512 "bdev_set_qos_limit", 00:05:43.512 "bdev_set_qd_sampling_period", 00:05:43.512 "bdev_get_bdevs", 00:05:43.512 "bdev_reset_iostat", 00:05:43.512 "bdev_get_iostat", 00:05:43.512 "bdev_examine", 00:05:43.512 "bdev_wait_for_examine", 00:05:43.512 "bdev_set_options", 00:05:43.512 "notify_get_notifications", 00:05:43.512 "notify_get_types", 00:05:43.512 "accel_get_stats", 00:05:43.512 "accel_set_options", 00:05:43.512 "accel_set_driver", 00:05:43.512 "accel_crypto_key_destroy", 00:05:43.512 "accel_crypto_keys_get", 00:05:43.512 "accel_crypto_key_create", 00:05:43.512 "accel_assign_opc", 00:05:43.512 "accel_get_module_info", 00:05:43.512 "accel_get_opc_assignments", 00:05:43.512 "vmd_rescan", 00:05:43.512 "vmd_remove_device", 00:05:43.512 "vmd_enable", 00:05:43.512 "sock_set_default_impl", 00:05:43.512 "sock_impl_set_options", 00:05:43.512 "sock_impl_get_options", 00:05:43.512 "iobuf_get_stats", 00:05:43.512 "iobuf_set_options", 00:05:43.512 "framework_get_pci_devices", 00:05:43.512 "framework_get_config", 00:05:43.512 "framework_get_subsystems", 00:05:43.512 "trace_get_info", 00:05:43.512 "trace_get_tpoint_group_mask", 00:05:43.512 "trace_disable_tpoint_group", 00:05:43.512 "trace_enable_tpoint_group", 00:05:43.512 "trace_clear_tpoint_mask", 00:05:43.512 "trace_set_tpoint_mask", 00:05:43.512 "spdk_get_version", 00:05:43.512 "rpc_get_methods" 00:05:43.512 ] 00:05:43.512 14:52:06 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.512 14:52:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.512 14:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.512 14:52:06 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.512 14:52:06 -- spdkcli/tcp.sh@38 -- # killprocess 68985 00:05:43.512 
14:52:06 -- common/autotest_common.sh@936 -- # '[' -z 68985 ']' 00:05:43.512 14:52:06 -- common/autotest_common.sh@940 -- # kill -0 68985 00:05:43.512 14:52:06 -- common/autotest_common.sh@941 -- # uname 00:05:43.512 14:52:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.512 14:52:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68985 00:05:43.512 killing process with pid 68985 00:05:43.512 14:52:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.512 14:52:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.512 14:52:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68985' 00:05:43.512 14:52:07 -- common/autotest_common.sh@955 -- # kill 68985 00:05:43.512 14:52:07 -- common/autotest_common.sh@960 -- # wait 68985 00:05:43.771 ************************************ 00:05:43.771 END TEST spdkcli_tcp 00:05:43.771 ************************************ 00:05:43.771 00:05:43.771 real 0m1.543s 00:05:43.771 user 0m2.701s 00:05:43.771 sys 0m0.390s 00:05:43.771 14:52:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.771 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.771 14:52:07 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.771 14:52:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.771 14:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.771 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.771 ************************************ 00:05:43.771 START TEST dpdk_mem_utility 00:05:43.771 ************************************ 00:05:43.771 14:52:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.029 * Looking for test storage... 00:05:44.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:44.029 14:52:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.029 14:52:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.029 14:52:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.029 14:52:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.029 14:52:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.029 14:52:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.029 14:52:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.029 14:52:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.029 14:52:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.029 14:52:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.029 14:52:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.029 14:52:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.029 14:52:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.029 14:52:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.029 14:52:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.029 14:52:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.029 14:52:07 -- scripts/common.sh@344 -- # : 1 00:05:44.029 14:52:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.029 14:52:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
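The killprocess helper traced above ends every one of these tests the same way: check that the PID still names an SPDK reactor, signal it, then reap it. A simplified reconstruction of that pattern (the real helper has extra handling for sudo-wrapped processes, trimmed here):

  killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")   # SPDK apps show up as reactor_0
    fi
    echo "killing process with pid $pid"
    if [[ $process_name == sudo ]]; then
      # the real helper signals sudo's child instead; omitted in this sketch
      return 1
    fi
    kill "$pid"
    wait "$pid"
  }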
ver1_l : ver2_l) )) 00:05:44.029 14:52:07 -- scripts/common.sh@364 -- # decimal 1 00:05:44.029 14:52:07 -- scripts/common.sh@352 -- # local d=1 00:05:44.029 14:52:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.029 14:52:07 -- scripts/common.sh@354 -- # echo 1 00:05:44.029 14:52:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.029 14:52:07 -- scripts/common.sh@365 -- # decimal 2 00:05:44.029 14:52:07 -- scripts/common.sh@352 -- # local d=2 00:05:44.029 14:52:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.029 14:52:07 -- scripts/common.sh@354 -- # echo 2 00:05:44.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.029 14:52:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.029 14:52:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.029 14:52:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.029 14:52:07 -- scripts/common.sh@367 -- # return 0 00:05:44.029 14:52:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.029 14:52:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.029 --rc genhtml_branch_coverage=1 00:05:44.029 --rc genhtml_function_coverage=1 00:05:44.029 --rc genhtml_legend=1 00:05:44.029 --rc geninfo_all_blocks=1 00:05:44.029 --rc geninfo_unexecuted_blocks=1 00:05:44.029 00:05:44.029 ' 00:05:44.029 14:52:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.029 --rc genhtml_branch_coverage=1 00:05:44.029 --rc genhtml_function_coverage=1 00:05:44.029 --rc genhtml_legend=1 00:05:44.029 --rc geninfo_all_blocks=1 00:05:44.029 --rc geninfo_unexecuted_blocks=1 00:05:44.029 00:05:44.029 ' 00:05:44.029 14:52:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.029 --rc genhtml_branch_coverage=1 00:05:44.029 --rc genhtml_function_coverage=1 00:05:44.029 --rc genhtml_legend=1 00:05:44.029 --rc geninfo_all_blocks=1 00:05:44.029 --rc geninfo_unexecuted_blocks=1 00:05:44.029 00:05:44.029 ' 00:05:44.029 14:52:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.029 --rc genhtml_branch_coverage=1 00:05:44.029 --rc genhtml_function_coverage=1 00:05:44.029 --rc genhtml_legend=1 00:05:44.029 --rc geninfo_all_blocks=1 00:05:44.029 --rc geninfo_unexecuted_blocks=1 00:05:44.029 00:05:44.029 ' 00:05:44.029 14:52:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.029 14:52:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69084 00:05:44.029 14:52:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69084 00:05:44.029 14:52:07 -- common/autotest_common.sh@829 -- # '[' -z 69084 ']' 00:05:44.029 14:52:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.029 14:52:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.029 14:52:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
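The waitforlisten call traced here simply blocks until the freshly launched spdk_tgt accepts connections on its RPC socket. A sketch of that polling idea — the 30-second budget and 0.1-second backoff are illustrative assumptions, not the harness's actual settings:

    #!/usr/bin/env python3
    # waitforlisten-style readiness probe: retry connecting to the RPC socket
    # until the target is listening or the deadline passes.
    import socket
    import time

    def wait_for_rpc(sock_path="/var/tmp/spdk.sock", timeout_s=30.0):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)   # succeeds once spdk_tgt is listening
                return True
            except OSError:
                time.sleep(0.1)        # not up yet; back off briefly and retry
            finally:
                s.close()              # this is only a probe, so always close
        return False
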
00:05:44.029 14:52:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.029 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 14:52:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.029 [2024-11-18 14:52:07.528586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.030 [2024-11-18 14:52:07.529157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69084 ] 00:05:44.305 [2024-11-18 14:52:07.675184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.305 [2024-11-18 14:52:07.705919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.305 [2024-11-18 14:52:07.706100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.878 14:52:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.878 14:52:08 -- common/autotest_common.sh@862 -- # return 0 00:05:44.878 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.878 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.878 14:52:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.878 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:44.878 { 00:05:44.878 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.879 } 00:05:44.879 14:52:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.879 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.879 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.879 1 heaps totaling size 814.000000 MiB 00:05:44.879 size: 814.000000 MiB heap id: 0 00:05:44.879 end heaps---------- 00:05:44.879 8 mempools totaling size 598.116089 MiB 00:05:44.879 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.879 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.879 size: 84.521057 MiB name: bdev_io_69084 00:05:44.879 size: 51.011292 MiB name: evtpool_69084 00:05:44.879 size: 50.003479 MiB name: msgpool_69084 00:05:44.879 size: 21.763794 MiB name: PDU_Pool 00:05:44.879 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.879 size: 0.026123 MiB name: Session_Pool 00:05:44.879 end mempools------- 00:05:44.879 6 memzones totaling size 4.142822 MiB 00:05:44.879 size: 1.000366 MiB name: RG_ring_0_69084 00:05:44.879 size: 1.000366 MiB name: RG_ring_1_69084 00:05:44.879 size: 1.000366 MiB name: RG_ring_4_69084 00:05:44.879 size: 1.000366 MiB name: RG_ring_5_69084 00:05:44.879 size: 0.125366 MiB name: RG_ring_2_69084 00:05:44.879 size: 0.015991 MiB name: RG_ring_3_69084 00:05:44.879 end memzones------- 00:05:44.879 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.879 heap id: 0 total size: 814.000000 MiB number of busy elements: 312 number of free elements: 15 00:05:44.879 list of free elements. 
size: 12.469727 MiB 00:05:44.879 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.879 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.879 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.879 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.879 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.879 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.879 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.879 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.879 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:44.879 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:05:44.879 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:44.879 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:44.879 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.879 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:44.879 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:44.879 list of standard malloc elements. size: 199.267700 MiB 00:05:44.879 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.879 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.879 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.879 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.879 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.879 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.879 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.879 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.879 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.879 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:05:44.879 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:44.879 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:44.879 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93040 with size: 0.000183 MiB 
00:05:44.880 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.880 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:44.880 element at 
address: 0x200027e655c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:44.880 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e700 
with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.881 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.881 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.881 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.881 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.881 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.881 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.881 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.881 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_69084_0 00:05:44.881 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.881 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69084_0 00:05:44.881 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.881 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69084_0 00:05:44.881 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.881 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.881 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.881 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.881 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.881 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69084 00:05:44.881 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.881 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69084 00:05:44.881 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.881 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69084 00:05:44.881 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.881 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.881 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.881 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.881 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.881 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.881 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69084 00:05:44.881 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.881 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69084 00:05:44.881 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.881 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69084 00:05:44.881 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.881 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69084 00:05:44.881 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.881 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69084 00:05:44.881 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.881 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.881 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.881 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.881 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.881 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.881 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.881 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_69084 00:05:44.881 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.881 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.881 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:44.881 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.881 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.881 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69084 00:05:44.881 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:44.881 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.881 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:44.881 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69084 00:05:44.881 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.881 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69084 00:05:44.881 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:44.881 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.881 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.881 14:52:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69084 00:05:44.881 14:52:08 -- common/autotest_common.sh@936 -- # '[' -z 69084 ']' 00:05:44.881 14:52:08 -- common/autotest_common.sh@940 -- # kill -0 69084 00:05:44.881 14:52:08 -- common/autotest_common.sh@941 -- # uname 00:05:44.881 14:52:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.881 14:52:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69084 00:05:45.140 14:52:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.140 14:52:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.140 14:52:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69084' 00:05:45.140 killing process with pid 69084 00:05:45.140 14:52:08 -- common/autotest_common.sh@955 -- # kill 69084 00:05:45.140 14:52:08 -- common/autotest_common.sh@960 -- # wait 69084 00:05:45.140 00:05:45.140 real 0m1.386s 00:05:45.140 user 0m1.413s 00:05:45.140 sys 0m0.352s 00:05:45.140 14:52:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.140 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.140 ************************************ 00:05:45.140 END TEST dpdk_mem_utility 00:05:45.140 ************************************ 00:05:45.398 14:52:08 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.398 14:52:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.398 14:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.398 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 ************************************ 00:05:45.398 START TEST event 00:05:45.398 ************************************ 00:05:45.398 14:52:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.398 * Looking for test storage... 
00:05:45.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.398 14:52:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:45.398 14:52:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:45.398 14:52:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:45.399 14:52:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:45.399 14:52:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:45.399 14:52:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:45.399 14:52:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:45.399 14:52:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:45.399 14:52:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:45.399 14:52:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.399 14:52:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:45.399 14:52:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:45.399 14:52:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:45.399 14:52:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:45.399 14:52:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:45.399 14:52:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:45.399 14:52:08 -- scripts/common.sh@344 -- # : 1 00:05:45.399 14:52:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:45.399 14:52:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.399 14:52:08 -- scripts/common.sh@364 -- # decimal 1 00:05:45.399 14:52:08 -- scripts/common.sh@352 -- # local d=1 00:05:45.399 14:52:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.399 14:52:08 -- scripts/common.sh@354 -- # echo 1 00:05:45.399 14:52:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:45.399 14:52:08 -- scripts/common.sh@365 -- # decimal 2 00:05:45.399 14:52:08 -- scripts/common.sh@352 -- # local d=2 00:05:45.399 14:52:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.399 14:52:08 -- scripts/common.sh@354 -- # echo 2 00:05:45.399 14:52:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:45.399 14:52:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:45.399 14:52:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:45.399 14:52:08 -- scripts/common.sh@367 -- # return 0 00:05:45.399 14:52:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.399 14:52:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.399 --rc genhtml_branch_coverage=1 00:05:45.399 --rc genhtml_function_coverage=1 00:05:45.399 --rc genhtml_legend=1 00:05:45.399 --rc geninfo_all_blocks=1 00:05:45.399 --rc geninfo_unexecuted_blocks=1 00:05:45.399 00:05:45.399 ' 00:05:45.399 14:52:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.399 --rc genhtml_branch_coverage=1 00:05:45.399 --rc genhtml_function_coverage=1 00:05:45.399 --rc genhtml_legend=1 00:05:45.399 --rc geninfo_all_blocks=1 00:05:45.399 --rc geninfo_unexecuted_blocks=1 00:05:45.399 00:05:45.399 ' 00:05:45.399 14:52:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.399 --rc genhtml_branch_coverage=1 00:05:45.399 --rc genhtml_function_coverage=1 00:05:45.399 --rc genhtml_legend=1 00:05:45.399 --rc geninfo_all_blocks=1 00:05:45.399 --rc geninfo_unexecuted_blocks=1 00:05:45.399 00:05:45.399 ' 00:05:45.399 14:52:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.399 --rc genhtml_branch_coverage=1 00:05:45.399 --rc genhtml_function_coverage=1 00:05:45.399 --rc genhtml_legend=1 00:05:45.399 --rc geninfo_all_blocks=1 00:05:45.399 --rc geninfo_unexecuted_blocks=1 00:05:45.399 00:05:45.399 ' 00:05:45.399 14:52:08 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:45.399 14:52:08 -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.399 14:52:08 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.399 14:52:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:45.399 14:52:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.399 14:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.399 ************************************ 00:05:45.399 START TEST event_perf 00:05:45.399 ************************************ 00:05:45.399 14:52:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.399 Running I/O for 1 seconds...[2024-11-18 14:52:08.951495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.399 [2024-11-18 14:52:08.951681] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69158 ] 00:05:45.657 [2024-11-18 14:52:09.099023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.657 [2024-11-18 14:52:09.133939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.657 [2024-11-18 14:52:09.134287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.657 [2024-11-18 14:52:09.134361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.657 Running I/O for 1 seconds...[2024-11-18 14:52:09.134309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.591 00:05:46.591 lcore 0: 197348 00:05:46.591 lcore 1: 197348 00:05:46.591 lcore 2: 197347 00:05:46.591 lcore 3: 197348 00:05:46.849 done. 00:05:46.849 00:05:46.849 real 0m1.282s 00:05:46.849 user 0m4.070s 00:05:46.849 sys 0m0.096s 00:05:46.849 14:52:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.849 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:46.849 ************************************ 00:05:46.849 END TEST event_perf 00:05:46.849 ************************************ 00:05:46.849 14:52:10 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.849 14:52:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:46.849 14:52:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.849 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:46.849 ************************************ 00:05:46.849 START TEST event_reactor 00:05:46.849 ************************************ 00:05:46.849 14:52:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.849 [2024-11-18 14:52:10.273640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:46.849 [2024-11-18 14:52:10.273765] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69197 ] 00:05:46.849 [2024-11-18 14:52:10.423212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.107 [2024-11-18 14:52:10.455833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.041 test_start 00:05:48.041 oneshot 00:05:48.041 tick 100 00:05:48.041 tick 100 00:05:48.041 tick 250 00:05:48.041 tick 100 00:05:48.041 tick 100 00:05:48.041 tick 100 00:05:48.041 tick 250 00:05:48.041 tick 500 00:05:48.041 tick 100 00:05:48.041 tick 100 00:05:48.041 tick 250 00:05:48.041 tick 100 00:05:48.041 tick 100 00:05:48.041 test_end 00:05:48.041 ************************************ 00:05:48.041 END TEST event_reactor 00:05:48.041 ************************************ 00:05:48.042 00:05:48.042 real 0m1.265s 00:05:48.042 user 0m1.075s 00:05:48.042 sys 0m0.082s 00:05:48.042 14:52:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.042 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.042 14:52:11 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.042 14:52:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:48.042 14:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.042 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.042 ************************************ 00:05:48.042 START TEST event_reactor_perf 00:05:48.042 ************************************ 00:05:48.042 14:52:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.042 [2024-11-18 14:52:11.583855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:48.042 [2024-11-18 14:52:11.583964] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69234 ] 00:05:48.300 [2024-11-18 14:52:11.726294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.300 [2024-11-18 14:52:11.757533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.234 test_start 00:05:49.234 test_end 00:05:49.234 Performance: 312967 events per second 00:05:49.234 00:05:49.234 real 0m1.259s 00:05:49.234 user 0m1.087s 00:05:49.234 sys 0m0.065s 00:05:49.234 14:52:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.234 ************************************ 00:05:49.234 END TEST event_reactor_perf 00:05:49.234 ************************************ 00:05:49.234 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.493 14:52:12 -- event/event.sh@49 -- # uname -s 00:05:49.493 14:52:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:49.493 14:52:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:49.493 14:52:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.493 14:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.493 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.493 ************************************ 00:05:49.493 START TEST event_scheduler 00:05:49.493 ************************************ 00:05:49.493 14:52:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:49.493 * Looking for test storage... 00:05:49.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:49.493 14:52:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.493 14:52:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:49.493 14:52:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.493 14:52:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:49.493 14:52:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:49.493 14:52:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:49.493 14:52:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:49.493 14:52:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:49.493 14:52:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:49.493 14:52:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.493 14:52:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:49.493 14:52:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:49.493 14:52:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:49.493 14:52:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:49.493 14:52:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:49.493 14:52:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:49.493 14:52:12 -- scripts/common.sh@344 -- # : 1 00:05:49.493 14:52:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:49.493 14:52:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.493 14:52:12 -- scripts/common.sh@364 -- # decimal 1 00:05:49.493 14:52:12 -- scripts/common.sh@352 -- # local d=1 00:05:49.493 14:52:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.493 14:52:12 -- scripts/common.sh@354 -- # echo 1 00:05:49.493 14:52:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:49.493 14:52:13 -- scripts/common.sh@365 -- # decimal 2 00:05:49.493 14:52:13 -- scripts/common.sh@352 -- # local d=2 00:05:49.493 14:52:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.493 14:52:13 -- scripts/common.sh@354 -- # echo 2 00:05:49.493 14:52:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:49.493 14:52:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:49.493 14:52:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:49.493 14:52:13 -- scripts/common.sh@367 -- # return 0 00:05:49.493 14:52:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.493 14:52:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:49.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.493 --rc genhtml_branch_coverage=1 00:05:49.493 --rc genhtml_function_coverage=1 00:05:49.493 --rc genhtml_legend=1 00:05:49.493 --rc geninfo_all_blocks=1 00:05:49.493 --rc geninfo_unexecuted_blocks=1 00:05:49.493 00:05:49.493 ' 00:05:49.493 14:52:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:49.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.493 --rc genhtml_branch_coverage=1 00:05:49.493 --rc genhtml_function_coverage=1 00:05:49.493 --rc genhtml_legend=1 00:05:49.493 --rc geninfo_all_blocks=1 00:05:49.493 --rc geninfo_unexecuted_blocks=1 00:05:49.493 00:05:49.493 ' 00:05:49.493 14:52:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:49.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.493 --rc genhtml_branch_coverage=1 00:05:49.493 --rc genhtml_function_coverage=1 00:05:49.493 --rc genhtml_legend=1 00:05:49.493 --rc geninfo_all_blocks=1 00:05:49.493 --rc geninfo_unexecuted_blocks=1 00:05:49.493 00:05:49.493 ' 00:05:49.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.493 14:52:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:49.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.493 --rc genhtml_branch_coverage=1 00:05:49.493 --rc genhtml_function_coverage=1 00:05:49.493 --rc genhtml_legend=1 00:05:49.493 --rc geninfo_all_blocks=1 00:05:49.493 --rc geninfo_unexecuted_blocks=1 00:05:49.493 00:05:49.493 ' 00:05:49.493 14:52:13 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:49.493 14:52:13 -- scheduler/scheduler.sh@35 -- # scheduler_pid=69298 00:05:49.493 14:52:13 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.493 14:52:13 -- scheduler/scheduler.sh@37 -- # waitforlisten 69298 00:05:49.493 14:52:13 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:49.493 14:52:13 -- common/autotest_common.sh@829 -- # '[' -z 69298 ']' 00:05:49.493 14:52:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.493 14:52:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.493 14:52:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
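The scheduler app below is launched with --wait-for-rpc, so the test selects the dynamic scheduler over RPC before framework initialization runs — the framework_set_scheduler and framework_start_init calls traced further down. A sketch of that two-call sequence, assuming the spdk_rpc() helper from the minimal-client sketch after the spdkcli_tcp run and the default socket path:

    # Assumes spdk_rpc() from the earlier minimal-client sketch is in scope.
    # Order matters: the scheduler must be chosen while the app is still
    # parked in --wait-for-rpc.
    spdk_rpc("framework_set_scheduler", {"name": "dynamic"})
    spdk_rpc("framework_start_init")   # releases the app; scheduler is now active

framework_start_init takes no parameters, which is why the helper omits the params field when none are given.
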
00:05:49.493 14:52:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.493 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:49.493 [2024-11-18 14:52:13.065133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.493 [2024-11-18 14:52:13.065381] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69298 ] 00:05:49.752 [2024-11-18 14:52:13.205237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.752 [2024-11-18 14:52:13.238367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.752 [2024-11-18 14:52:13.238840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.752 [2024-11-18 14:52:13.238994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.752 [2024-11-18 14:52:13.239029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.321 14:52:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.321 14:52:13 -- common/autotest_common.sh@862 -- # return 0 00:05:50.321 14:52:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.321 14:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.321 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.321 POWER: Env isn't set yet! 00:05:50.321 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:50.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:50.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:50.321 POWER: Attempting to initialise PSTAT power management... 00:05:50.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:50.321 POWER: Cannot set governor of lcore 0 to performance 00:05:50.321 POWER: Attempting to initialise AMD PSTATE power management... 00:05:50.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:50.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:50.321 POWER: Attempting to initialise CPPC power management... 00:05:50.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:50.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:50.321 POWER: Attempting to initialise VM power management... 
00:05:50.321 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:50.321 POWER: Unable to set Power Management Environment for lcore 0 00:05:50.321 [2024-11-18 14:52:13.892197] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:50.321 [2024-11-18 14:52:13.892224] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:50.321 [2024-11-18 14:52:13.892243] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:50.321 [2024-11-18 14:52:13.892260] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.321 [2024-11-18 14:52:13.892267] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.321 [2024-11-18 14:52:13.892277] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.321 14:52:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.321 14:52:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.321 14:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.321 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.579 [2024-11-18 14:52:13.962099] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:50.579 14:52:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.579 14:52:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.580 14:52:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.580 14:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.580 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 ************************************ 00:05:50.580 START TEST scheduler_create_thread 00:05:50.580 ************************************ 00:05:50.580 14:52:13 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:50.580 14:52:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:50.580 14:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 2 00:05:50.580 14:52:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:50.580 14:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 3 00:05:50.580 14:52:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:50.580 14:52:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 4 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 5 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 6 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 7 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 8 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 9 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 10 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.580 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.580 14:52:14 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.580 14:52:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.580 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.146 ************************************ 00:05:51.146 END TEST scheduler_create_thread 00:05:51.146 ************************************ 00:05:51.146 14:52:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.146 00:05:51.146 real 0m0.593s 00:05:51.146 user 0m0.012s 00:05:51.146 sys 0m0.005s 00:05:51.146 14:52:14 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.146 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.146 14:52:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.146 14:52:14 -- scheduler/scheduler.sh@46 -- # killprocess 69298 00:05:51.146 14:52:14 -- common/autotest_common.sh@936 -- # '[' -z 69298 ']' 00:05:51.146 14:52:14 -- common/autotest_common.sh@940 -- # kill -0 69298 00:05:51.146 14:52:14 -- common/autotest_common.sh@941 -- # uname 00:05:51.146 14:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.146 14:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69298 00:05:51.146 killing process with pid 69298 00:05:51.146 14:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:51.146 14:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:51.146 14:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69298' 00:05:51.146 14:52:14 -- common/autotest_common.sh@955 -- # kill 69298 00:05:51.146 14:52:14 -- common/autotest_common.sh@960 -- # wait 69298 00:05:51.717 [2024-11-18 14:52:15.043967] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:51.717 00:05:51.717 real 0m2.330s 00:05:51.717 user 0m4.500s 00:05:51.717 sys 0m0.301s 00:05:51.717 14:52:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.717 ************************************ 00:05:51.717 END TEST event_scheduler 00:05:51.717 ************************************ 00:05:51.717 14:52:15 -- common/autotest_common.sh@10 -- # set +x 00:05:51.717 14:52:15 -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.717 14:52:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.717 14:52:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.717 14:52:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.717 14:52:15 -- common/autotest_common.sh@10 -- # set +x 00:05:51.717 ************************************ 00:05:51.717 START TEST app_repeat 00:05:51.717 ************************************ 00:05:51.717 14:52:15 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:51.717 14:52:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.717 14:52:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.717 14:52:15 -- event/event.sh@13 -- # local nbd_list 00:05:51.717 14:52:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.717 14:52:15 -- event/event.sh@14 -- # local bdev_list 00:05:51.717 14:52:15 -- event/event.sh@15 -- # local repeat_times=4 00:05:51.717 14:52:15 -- event/event.sh@17 -- # modprobe nbd 00:05:51.718 Process app_repeat pid: 69371 00:05:51.718 spdk_app_start Round 0 00:05:51.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
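The scheduler pass that just finished selects SPDK's dynamic scheduler via the framework_set_scheduler RPC (the POWER errors above only mean no cpufreq governor was writable inside the VM, so dpdk_governor init fails and the scheduler runs without frequency scaling), then registers synthetic threads through an rpc.py plugin, each with a name, an optional pinned cpumask, and a target "active" percentage. Stripped of the rpc_cmd wrappers, the traced sequence reduces to roughly the sketch below; it assumes the test's scheduler_plugin module is importable by rpc.py (the harness arranges PYTHONPATH for this) and that the plugin prints the new thread id, as the thread_id= captures above suggest:

    # Select the dynamic scheduler, then drive it with synthetic threads.
    rpc() { scripts/rpc.py --plugin scheduler_plugin "$@"; }
    rpc framework_set_scheduler dynamic
    rpc framework_start_init
    # Busy threads pinned one per core (100% active) plus idle pinned peers (0%).
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done
    # Unpinned threads exercise rebalancing; one is retuned live, one deleted.
    rpc scheduler_thread_create -n one_third_active -a 30
    id=$(rpc scheduler_thread_create -n half_active -a 0)   # plugin echoes the id
    rpc scheduler_thread_set_active "$id" 50
    id=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$id"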
00:05:51.718 14:52:15 -- event/event.sh@19 -- # repeat_pid=69371 00:05:51.718 14:52:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.718 14:52:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69371' 00:05:51.718 14:52:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:51.718 14:52:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.718 14:52:15 -- event/event.sh@25 -- # waitforlisten 69371 /var/tmp/spdk-nbd.sock 00:05:51.718 14:52:15 -- common/autotest_common.sh@829 -- # '[' -z 69371 ']' 00:05:51.718 14:52:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.718 14:52:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.718 14:52:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.718 14:52:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.718 14:52:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.718 14:52:15 -- common/autotest_common.sh@10 -- # set +x 00:05:51.718 [2024-11-18 14:52:15.284364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.718 [2024-11-18 14:52:15.284465] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69371 ] 00:05:51.979 [2024-11-18 14:52:15.425704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.979 [2024-11-18 14:52:15.457604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.979 [2024-11-18 14:52:15.457722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.922 14:52:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.922 14:52:16 -- common/autotest_common.sh@862 -- # return 0 00:05:52.922 14:52:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.922 Malloc0 00:05:52.922 14:52:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.183 Malloc1 00:05:53.183 14:52:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:53.183 14:52:16 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.183 /dev/nbd0 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.183 14:52:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.183 14:52:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:53.183 14:52:16 -- common/autotest_common.sh@867 -- # local i 00:05:53.183 14:52:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.183 14:52:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.183 14:52:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:53.183 14:52:16 -- common/autotest_common.sh@871 -- # break 00:05:53.183 14:52:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.183 14:52:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.183 14:52:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.183 1+0 records in 00:05:53.183 1+0 records out 00:05:53.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340124 s, 12.0 MB/s 00:05:53.183 14:52:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.445 14:52:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:53.445 14:52:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.445 14:52:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.445 14:52:16 -- common/autotest_common.sh@887 -- # return 0 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.445 /dev/nbd1 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.445 14:52:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:53.445 14:52:16 -- common/autotest_common.sh@867 -- # local i 00:05:53.445 14:52:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.445 14:52:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.445 14:52:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:53.445 14:52:16 -- common/autotest_common.sh@871 -- # break 00:05:53.445 14:52:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.445 14:52:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.445 14:52:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.445 1+0 records in 00:05:53.445 1+0 records out 00:05:53.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039217 s, 10.4 MB/s 00:05:53.445 14:52:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.445 14:52:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:53.445 14:52:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.445 14:52:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.445 14:52:16 -- common/autotest_common.sh@887 -- # return 0 00:05:53.445 
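Both waitfornbd probes above follow the same recipe: poll until the device name appears in /proc/partitions, then prove the kernel-to-SPDK path actually serves I/O by reading one block with O_DIRECT and checking its size. A condensed sketch (function name, retry count, and temp path are illustrative, not the exact harness code):

    waitfornbd_sketch() {
        local nbd=$1 tmp=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read confirms the nbd device answers requests.
        dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$tmp") -eq 4096 ]]
    }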
14:52:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.445 14:52:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.748 14:52:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.748 { 00:05:53.748 "nbd_device": "/dev/nbd0", 00:05:53.748 "bdev_name": "Malloc0" 00:05:53.748 }, 00:05:53.748 { 00:05:53.748 "nbd_device": "/dev/nbd1", 00:05:53.748 "bdev_name": "Malloc1" 00:05:53.748 } 00:05:53.748 ]' 00:05:53.748 14:52:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.748 { 00:05:53.748 "nbd_device": "/dev/nbd0", 00:05:53.748 "bdev_name": "Malloc0" 00:05:53.748 }, 00:05:53.748 { 00:05:53.748 "nbd_device": "/dev/nbd1", 00:05:53.748 "bdev_name": "Malloc1" 00:05:53.748 } 00:05:53.748 ]' 00:05:53.748 14:52:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.748 14:52:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.748 /dev/nbd1' 00:05:53.748 14:52:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.748 /dev/nbd1' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.749 256+0 records in 00:05:53.749 256+0 records out 00:05:53.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651452 s, 161 MB/s 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.749 256+0 records in 00:05:53.749 256+0 records out 00:05:53.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184413 s, 56.9 MB/s 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.749 256+0 records in 00:05:53.749 256+0 records out 00:05:53.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182045 s, 57.6 MB/s 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.749 14:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@41 -- # break 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.008 14:52:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@41 -- # break 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.267 14:52:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@65 -- # true 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.526 
14:52:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.526 14:52:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.526 14:52:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.785 14:52:18 -- event/event.sh@35 -- # sleep 3 00:05:54.785 [2024-11-18 14:52:18.220177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.785 [2024-11-18 14:52:18.250698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.785 [2024-11-18 14:52:18.250802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.785 [2024-11-18 14:52:18.281514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.785 [2024-11-18 14:52:18.281727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.069 spdk_app_start Round 1 00:05:58.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.069 14:52:21 -- event/event.sh@23 -- # for i in {0..2} 00:05:58.069 14:52:21 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.069 14:52:21 -- event/event.sh@25 -- # waitforlisten 69371 /var/tmp/spdk-nbd.sock 00:05:58.069 14:52:21 -- common/autotest_common.sh@829 -- # '[' -z 69371 ']' 00:05:58.069 14:52:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.069 14:52:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.069 14:52:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
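Each round's data-integrity pass above is plain shell around the exported devices: 1 MiB of urandom data is written through every nbd device with O_DIRECT, then compared back byte for byte against the source file. Reduced to its essentials (paths shortened):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through nbd
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # byte-for-byte verify
    done
    rm "$tmp"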
00:05:58.069 14:52:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.069 14:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.069 14:52:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.069 14:52:21 -- common/autotest_common.sh@862 -- # return 0 00:05:58.069 14:52:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.069 Malloc0 00:05:58.069 14:52:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.327 Malloc1 00:05:58.327 14:52:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@12 -- # local i 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.327 14:52:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.327 /dev/nbd0 00:05:58.586 14:52:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.586 14:52:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.586 14:52:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.586 14:52:21 -- common/autotest_common.sh@867 -- # local i 00:05:58.586 14:52:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.586 14:52:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.586 14:52:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.586 14:52:21 -- common/autotest_common.sh@871 -- # break 00:05:58.586 14:52:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.586 14:52:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.586 14:52:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.586 1+0 records in 00:05:58.586 1+0 records out 00:05:58.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050118 s, 8.2 MB/s 00:05:58.586 14:52:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.586 14:52:21 -- common/autotest_common.sh@884 -- # size=4096 00:05:58.586 14:52:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.586 14:52:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.586 14:52:21 -- common/autotest_common.sh@887 -- # return 0 00:05:58.586 14:52:21 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.586 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.586 14:52:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.586 /dev/nbd1 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.586 14:52:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.586 14:52:22 -- common/autotest_common.sh@867 -- # local i 00:05:58.586 14:52:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.586 14:52:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.586 14:52:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.586 14:52:22 -- common/autotest_common.sh@871 -- # break 00:05:58.586 14:52:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.586 14:52:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.586 14:52:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.586 1+0 records in 00:05:58.586 1+0 records out 00:05:58.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267132 s, 15.3 MB/s 00:05:58.586 14:52:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.586 14:52:22 -- common/autotest_common.sh@884 -- # size=4096 00:05:58.586 14:52:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.586 14:52:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.586 14:52:22 -- common/autotest_common.sh@887 -- # return 0 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.586 14:52:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.844 { 00:05:58.844 "nbd_device": "/dev/nbd0", 00:05:58.844 "bdev_name": "Malloc0" 00:05:58.844 }, 00:05:58.844 { 00:05:58.844 "nbd_device": "/dev/nbd1", 00:05:58.844 "bdev_name": "Malloc1" 00:05:58.844 } 00:05:58.844 ]' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.844 { 00:05:58.844 "nbd_device": "/dev/nbd0", 00:05:58.844 "bdev_name": "Malloc0" 00:05:58.844 }, 00:05:58.844 { 00:05:58.844 "nbd_device": "/dev/nbd1", 00:05:58.844 "bdev_name": "Malloc1" 00:05:58.844 } 00:05:58.844 ]' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.844 /dev/nbd1' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.844 /dev/nbd1' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.844 256+0 records in 00:05:58.844 256+0 records out 00:05:58.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066223 s, 158 MB/s 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.844 14:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.844 256+0 records in 00:05:58.844 256+0 records out 00:05:58.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151732 s, 69.1 MB/s 00:05:58.845 14:52:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.845 14:52:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.105 256+0 records in 00:05:59.105 256+0 records out 00:05:59.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200482 s, 52.3 MB/s 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@51 -- # local i 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@41 -- # break 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.105 14:52:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@41 -- # break 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.364 14:52:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@65 -- # true 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.622 14:52:23 -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.622 14:52:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.880 14:52:23 -- event/event.sh@35 -- # sleep 3 00:05:59.880 [2024-11-18 14:52:23.375410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.880 [2024-11-18 14:52:23.404285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.880 [2024-11-18 14:52:23.404294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.880 [2024-11-18 14:52:23.434457] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.880 [2024-11-18 14:52:23.434509] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.164 spdk_app_start Round 2 00:06:03.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
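Teardown above is verified from the target's side, not just the kernel's: after the nbd_stop_disk calls, nbd_get_disks must return an empty JSON array before the round may end. Roughly:

    # Ask the target which nbd devices it still exports; expect none.
    sock=/var/tmp/spdk-nbd.sock
    count=$(scripts/rpc.py -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    (( count == 0 )) && echo "all nbd exports stopped"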
00:06:03.164 14:52:26 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.164 14:52:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:03.164 14:52:26 -- event/event.sh@25 -- # waitforlisten 69371 /var/tmp/spdk-nbd.sock 00:06:03.164 14:52:26 -- common/autotest_common.sh@829 -- # '[' -z 69371 ']' 00:06:03.164 14:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.164 14:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.164 14:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.164 14:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.164 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:03.164 14:52:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.164 14:52:26 -- common/autotest_common.sh@862 -- # return 0 00:06:03.164 14:52:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.164 Malloc0 00:06:03.164 14:52:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.458 Malloc1 00:06:03.458 14:52:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.458 14:52:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.726 /dev/nbd0 00:06:03.726 14:52:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.726 14:52:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.726 14:52:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:03.726 14:52:27 -- common/autotest_common.sh@867 -- # local i 00:06:03.726 14:52:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.726 14:52:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.726 14:52:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:03.726 14:52:27 -- common/autotest_common.sh@871 -- # break 00:06:03.726 14:52:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.726 14:52:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.726 14:52:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:03.726 1+0 records in 00:06:03.726 1+0 records out 00:06:03.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038152 s, 10.7 MB/s 00:06:03.726 14:52:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.726 14:52:27 -- common/autotest_common.sh@884 -- # size=4096 00:06:03.726 14:52:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.726 14:52:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.726 14:52:27 -- common/autotest_common.sh@887 -- # return 0 00:06:03.726 14:52:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.726 14:52:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.726 14:52:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.984 /dev/nbd1 00:06:03.984 14:52:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.984 14:52:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.984 14:52:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:03.984 14:52:27 -- common/autotest_common.sh@867 -- # local i 00:06:03.984 14:52:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.984 14:52:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.984 14:52:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:03.984 14:52:27 -- common/autotest_common.sh@871 -- # break 00:06:03.984 14:52:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.984 14:52:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.985 14:52:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.985 1+0 records in 00:06:03.985 1+0 records out 00:06:03.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225426 s, 18.2 MB/s 00:06:03.985 14:52:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.985 14:52:27 -- common/autotest_common.sh@884 -- # size=4096 00:06:03.985 14:52:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.985 14:52:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.985 14:52:27 -- common/autotest_common.sh@887 -- # return 0 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.985 { 00:06:03.985 "nbd_device": "/dev/nbd0", 00:06:03.985 "bdev_name": "Malloc0" 00:06:03.985 }, 00:06:03.985 { 00:06:03.985 "nbd_device": "/dev/nbd1", 00:06:03.985 "bdev_name": "Malloc1" 00:06:03.985 } 00:06:03.985 ]' 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.985 { 00:06:03.985 "nbd_device": "/dev/nbd0", 00:06:03.985 "bdev_name": "Malloc0" 00:06:03.985 }, 00:06:03.985 { 00:06:03.985 "nbd_device": "/dev/nbd1", 00:06:03.985 "bdev_name": "Malloc1" 00:06:03.985 } 00:06:03.985 ]' 00:06:03.985 14:52:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:06:04.243 /dev/nbd1' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.243 /dev/nbd1' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.243 256+0 records in 00:06:04.243 256+0 records out 00:06:04.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513134 s, 204 MB/s 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.243 256+0 records in 00:06:04.243 256+0 records out 00:06:04.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134772 s, 77.8 MB/s 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.243 256+0 records in 00:06:04.243 256+0 records out 00:06:04.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161792 s, 64.8 MB/s 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.243 14:52:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.244 14:52:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.244 14:52:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.244 14:52:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.244 14:52:27 -- bdev/nbd_common.sh@51 -- # local i 00:06:04.244 
14:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.244 14:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@41 -- # break 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.502 14:52:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@41 -- # break 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.502 14:52:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@65 -- # true 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.760 14:52:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.760 14:52:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.018 14:52:28 -- event/event.sh@35 -- # sleep 3 00:06:05.018 [2024-11-18 14:52:28.571588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.018 [2024-11-18 14:52:28.601687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.018 [2024-11-18 14:52:28.601691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.277 [2024-11-18 14:52:28.631054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.277 [2024-11-18 14:52:28.631096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
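Rounds end with spdk_kill_instance rather than a raw kill(1): the target raises the signal against itself from an RPC handler, so the app unwinds through its normal shutdown path. In isolation:

    # Graceful stop: SIGTERM is delivered inside the target via RPC.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3   # matches the harness pause before the next round relaunches the app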
00:06:08.559 14:52:31 -- event/event.sh@38 -- # waitforlisten 69371 /var/tmp/spdk-nbd.sock 00:06:08.559 14:52:31 -- common/autotest_common.sh@829 -- # '[' -z 69371 ']' 00:06:08.559 14:52:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.559 14:52:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.559 14:52:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.559 14:52:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.559 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.559 14:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.559 14:52:31 -- common/autotest_common.sh@862 -- # return 0 00:06:08.560 14:52:31 -- event/event.sh@39 -- # killprocess 69371 00:06:08.560 14:52:31 -- common/autotest_common.sh@936 -- # '[' -z 69371 ']' 00:06:08.560 14:52:31 -- common/autotest_common.sh@940 -- # kill -0 69371 00:06:08.560 14:52:31 -- common/autotest_common.sh@941 -- # uname 00:06:08.560 14:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.560 14:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69371 00:06:08.560 14:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.560 14:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.560 14:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69371' 00:06:08.560 killing process with pid 69371 00:06:08.560 14:52:31 -- common/autotest_common.sh@955 -- # kill 69371 00:06:08.560 14:52:31 -- common/autotest_common.sh@960 -- # wait 69371 00:06:08.560 spdk_app_start is called in Round 0. 00:06:08.560 Shutdown signal received, stop current app iteration 00:06:08.560 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:08.560 spdk_app_start is called in Round 1. 00:06:08.560 Shutdown signal received, stop current app iteration 00:06:08.560 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:08.560 spdk_app_start is called in Round 2. 00:06:08.560 Shutdown signal received, stop current app iteration 00:06:08.560 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:08.560 spdk_app_start is called in Round 3. 
00:06:08.560 Shutdown signal received, stop current app iteration 00:06:08.560 ************************************ 00:06:08.560 END TEST app_repeat 00:06:08.560 ************************************ 00:06:08.560 14:52:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.560 14:52:31 -- event/event.sh@42 -- # return 0 00:06:08.560 00:06:08.560 real 0m16.580s 00:06:08.560 user 0m36.848s 00:06:08.560 sys 0m2.018s 00:06:08.560 14:52:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.560 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.560 14:52:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.560 14:52:31 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.560 14:52:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.560 14:52:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.560 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.560 ************************************ 00:06:08.560 START TEST cpu_locks 00:06:08.560 ************************************ 00:06:08.560 14:52:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.560 * Looking for test storage... 00:06:08.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.560 14:52:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.560 14:52:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.560 14:52:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.560 14:52:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.560 14:52:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.560 14:52:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.560 14:52:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.560 14:52:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.560 14:52:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.560 14:52:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.560 14:52:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.560 14:52:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.560 14:52:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.560 14:52:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.560 14:52:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.560 14:52:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.560 14:52:32 -- scripts/common.sh@344 -- # : 1 00:06:08.560 14:52:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.560 14:52:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.560 14:52:32 -- scripts/common.sh@364 -- # decimal 1 00:06:08.560 14:52:32 -- scripts/common.sh@352 -- # local d=1 00:06:08.560 14:52:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.560 14:52:32 -- scripts/common.sh@354 -- # echo 1 00:06:08.560 14:52:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.560 14:52:32 -- scripts/common.sh@365 -- # decimal 2 00:06:08.560 14:52:32 -- scripts/common.sh@352 -- # local d=2 00:06:08.560 14:52:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.560 14:52:32 -- scripts/common.sh@354 -- # echo 2 00:06:08.560 14:52:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.560 14:52:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.560 14:52:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.560 14:52:32 -- scripts/common.sh@367 -- # return 0 00:06:08.560 14:52:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.560 14:52:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.560 --rc genhtml_branch_coverage=1 00:06:08.560 --rc genhtml_function_coverage=1 00:06:08.560 --rc genhtml_legend=1 00:06:08.560 --rc geninfo_all_blocks=1 00:06:08.560 --rc geninfo_unexecuted_blocks=1 00:06:08.560 00:06:08.560 ' 00:06:08.560 14:52:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.560 --rc genhtml_branch_coverage=1 00:06:08.560 --rc genhtml_function_coverage=1 00:06:08.560 --rc genhtml_legend=1 00:06:08.560 --rc geninfo_all_blocks=1 00:06:08.560 --rc geninfo_unexecuted_blocks=1 00:06:08.560 00:06:08.560 ' 00:06:08.560 14:52:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.560 --rc genhtml_branch_coverage=1 00:06:08.560 --rc genhtml_function_coverage=1 00:06:08.560 --rc genhtml_legend=1 00:06:08.560 --rc geninfo_all_blocks=1 00:06:08.560 --rc geninfo_unexecuted_blocks=1 00:06:08.560 00:06:08.560 ' 00:06:08.560 14:52:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.560 --rc genhtml_branch_coverage=1 00:06:08.560 --rc genhtml_function_coverage=1 00:06:08.560 --rc genhtml_legend=1 00:06:08.560 --rc geninfo_all_blocks=1 00:06:08.560 --rc geninfo_unexecuted_blocks=1 00:06:08.560 00:06:08.560 ' 00:06:08.560 14:52:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.560 14:52:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.560 14:52:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.560 14:52:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.560 14:52:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.560 14:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.560 14:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:08.560 ************************************ 00:06:08.560 START TEST default_locks 00:06:08.560 ************************************ 00:06:08.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
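The lcov guard traced at the top of cpu_locks is a pure-bash semantic version compare: both strings are split on '.', '-' and ':' and the numeric fields are compared left to right, with missing fields treated as zero. A condensed equivalent (helper name is ours, not the harness's; numeric fields only):

    # Succeed when version $1 is strictly less than $2.
    version_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov: use the --rc option spelling"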
00:06:08.560 14:52:32 -- common/autotest_common.sh@1114 -- # default_locks 00:06:08.560 14:52:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69795 00:06:08.560 14:52:32 -- event/cpu_locks.sh@47 -- # waitforlisten 69795 00:06:08.560 14:52:32 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.560 14:52:32 -- common/autotest_common.sh@829 -- # '[' -z 69795 ']' 00:06:08.560 14:52:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.560 14:52:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.560 14:52:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.560 14:52:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.560 14:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:08.560 [2024-11-18 14:52:32.080576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.560 [2024-11-18 14:52:32.080671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69795 ] 00:06:08.819 [2024-11-18 14:52:32.221384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.819 [2024-11-18 14:52:32.251765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.819 [2024-11-18 14:52:32.251927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.386 14:52:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.386 14:52:32 -- common/autotest_common.sh@862 -- # return 0 00:06:09.386 14:52:32 -- event/cpu_locks.sh@49 -- # locks_exist 69795 00:06:09.386 14:52:32 -- event/cpu_locks.sh@22 -- # lslocks -p 69795 00:06:09.386 14:52:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.645 14:52:33 -- event/cpu_locks.sh@50 -- # killprocess 69795 00:06:09.645 14:52:33 -- common/autotest_common.sh@936 -- # '[' -z 69795 ']' 00:06:09.645 14:52:33 -- common/autotest_common.sh@940 -- # kill -0 69795 00:06:09.645 14:52:33 -- common/autotest_common.sh@941 -- # uname 00:06:09.645 14:52:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.645 14:52:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69795 00:06:09.645 killing process with pid 69795 00:06:09.645 14:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.645 14:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.645 14:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69795' 00:06:09.645 14:52:33 -- common/autotest_common.sh@955 -- # kill 69795 00:06:09.645 14:52:33 -- common/autotest_common.sh@960 -- # wait 69795 00:06:09.904 14:52:33 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69795 00:06:09.904 14:52:33 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.904 14:52:33 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69795 00:06:09.904 14:52:33 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.904 14:52:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.904 14:52:33 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.904 14:52:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.904 14:52:33 -- common/autotest_common.sh@653 -- # 
waitforlisten 69795 00:06:09.904 14:52:33 -- common/autotest_common.sh@829 -- # '[' -z 69795 ']' 00:06:09.904 14:52:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.904 ERROR: process (pid: 69795) is no longer running 00:06:09.904 14:52:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.904 14:52:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.904 14:52:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.904 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69795) - No such process 00:06:09.904 14:52:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.904 14:52:33 -- common/autotest_common.sh@862 -- # return 1 00:06:09.904 14:52:33 -- common/autotest_common.sh@653 -- # es=1 00:06:09.904 14:52:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.904 14:52:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.904 14:52:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.904 ************************************ 00:06:09.904 END TEST default_locks 00:06:09.904 ************************************ 00:06:09.904 14:52:33 -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.904 14:52:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.904 14:52:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.904 14:52:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.904 00:06:09.904 real 0m1.324s 00:06:09.904 user 0m1.367s 00:06:09.904 sys 0m0.378s 00:06:09.904 14:52:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.904 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.904 14:52:33 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.904 14:52:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.904 14:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.904 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.904 ************************************ 00:06:09.904 START TEST default_locks_via_rpc 00:06:09.904 ************************************ 00:06:09.904 14:52:33 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:09.905 14:52:33 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69837 00:06:09.905 14:52:33 -- event/cpu_locks.sh@63 -- # waitforlisten 69837 00:06:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.905 14:52:33 -- common/autotest_common.sh@829 -- # '[' -z 69837 ']' 00:06:09.905 14:52:33 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.905 14:52:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.905 14:52:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.905 14:52:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.905 14:52:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.905 14:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 [2024-11-18 14:52:33.460361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
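default_locks above boils down to one assertion: the target launched with -m 0x1 must hold a POSIX file lock whose name contains spdk_cpu_lock, checked with plain util-linux lslocks. A minimal standalone version of the probe (the sleep is a crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &
    pid=$!
    sleep 2                                    # harness waits on the RPC socket instead
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"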
00:06:09.905 [2024-11-18 14:52:33.461075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69837 ] 00:06:10.163 [2024-11-18 14:52:33.611286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.163 [2024-11-18 14:52:33.641409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.163 [2024-11-18 14:52:33.641592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.730 14:52:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.730 14:52:34 -- common/autotest_common.sh@862 -- # return 0 00:06:10.730 14:52:34 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.730 14:52:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.730 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.730 14:52:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.730 14:52:34 -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.730 14:52:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.730 14:52:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.730 14:52:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.730 14:52:34 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.730 14:52:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.730 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.730 14:52:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.730 14:52:34 -- event/cpu_locks.sh@71 -- # locks_exist 69837 00:06:10.730 14:52:34 -- event/cpu_locks.sh@22 -- # lslocks -p 69837 00:06:10.730 14:52:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.988 14:52:34 -- event/cpu_locks.sh@73 -- # killprocess 69837 00:06:10.988 14:52:34 -- common/autotest_common.sh@936 -- # '[' -z 69837 ']' 00:06:10.988 14:52:34 -- common/autotest_common.sh@940 -- # kill -0 69837 00:06:10.988 14:52:34 -- common/autotest_common.sh@941 -- # uname 00:06:10.988 14:52:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.988 14:52:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69837 00:06:10.988 killing process with pid 69837 00:06:10.988 14:52:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.988 14:52:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.988 14:52:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69837' 00:06:10.988 14:52:34 -- common/autotest_common.sh@955 -- # kill 69837 00:06:10.988 14:52:34 -- common/autotest_common.sh@960 -- # wait 69837 00:06:11.247 ************************************ 00:06:11.247 END TEST default_locks_via_rpc 00:06:11.247 ************************************ 00:06:11.247 00:06:11.247 real 0m1.359s 00:06:11.247 user 0m1.379s 00:06:11.247 sys 0m0.404s 00:06:11.247 14:52:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.247 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 14:52:34 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.247 14:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.247 14:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.247 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 
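default_locks_via_rpc exercises the same lock without restarting the target: framework_disable_cpumask_locks drops the lock files (no_locks then passes) and framework_enable_cpumask_locks re-claims them (locks_exist passes again). Sketched with rpc.py, whose path here is assumed from the usual repo layout:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py               # assumed location
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # lslocks finds no spdk_cpu_lock now
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # lock file re-acquired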
************************************ 00:06:11.247 START TEST non_locking_app_on_locked_coremask 00:06:11.247 ************************************ 00:06:11.247 14:52:34 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:11.247 14:52:34 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69878 00:06:11.247 14:52:34 -- event/cpu_locks.sh@81 -- # waitforlisten 69878 /var/tmp/spdk.sock 00:06:11.247 14:52:34 -- common/autotest_common.sh@829 -- # '[' -z 69878 ']' 00:06:11.247 14:52:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.247 14:52:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.247 14:52:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.247 14:52:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.247 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:11.247 14:52:34 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.506 [2024-11-18 14:52:34.856886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.506 [2024-11-18 14:52:34.857173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69878 ] 00:06:11.506 [2024-11-18 14:52:35.023171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.506 [2024-11-18 14:52:35.063970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.506 [2024-11-18 14:52:35.064249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.440 14:52:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.440 14:52:35 -- common/autotest_common.sh@862 -- # return 0 00:06:12.440 14:52:35 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.440 14:52:35 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69894 00:06:12.440 14:52:35 -- event/cpu_locks.sh@85 -- # waitforlisten 69894 /var/tmp/spdk2.sock 00:06:12.440 14:52:35 -- common/autotest_common.sh@829 -- # '[' -z 69894 ']' 00:06:12.440 14:52:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.440 14:52:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.440 14:52:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.440 14:52:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.440 14:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:12.440 [2024-11-18 14:52:35.729012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.440 [2024-11-18 14:52:35.729290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69894 ] 00:06:12.440 [2024-11-18 14:52:35.875271] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
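non_locking_app_on_locked_coremask demonstrates the escape hatch: a second target may run on an already-claimed core if it opts out of locking and takes its own RPC socket, which is exactly why the trace prints "CPU core locks deactivated." for pid 69894. The launch pair, reduced from the commands above:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                 # claims core 0
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0, no claim attempted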
00:06:12.440 [2024-11-18 14:52:35.875322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.440 [2024-11-18 14:52:35.939358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.440 [2024-11-18 14:52:35.939518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.083 14:52:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.083 14:52:36 -- common/autotest_common.sh@862 -- # return 0 00:06:13.083 14:52:36 -- event/cpu_locks.sh@87 -- # locks_exist 69878 00:06:13.083 14:52:36 -- event/cpu_locks.sh@22 -- # lslocks -p 69878 00:06:13.083 14:52:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.355 14:52:36 -- event/cpu_locks.sh@89 -- # killprocess 69878 00:06:13.355 14:52:36 -- common/autotest_common.sh@936 -- # '[' -z 69878 ']' 00:06:13.355 14:52:36 -- common/autotest_common.sh@940 -- # kill -0 69878 00:06:13.355 14:52:36 -- common/autotest_common.sh@941 -- # uname 00:06:13.355 14:52:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.355 14:52:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69878 00:06:13.355 killing process with pid 69878 00:06:13.355 14:52:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.355 14:52:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.355 14:52:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69878' 00:06:13.355 14:52:36 -- common/autotest_common.sh@955 -- # kill 69878 00:06:13.355 14:52:36 -- common/autotest_common.sh@960 -- # wait 69878 00:06:13.922 14:52:37 -- event/cpu_locks.sh@90 -- # killprocess 69894 00:06:13.922 14:52:37 -- common/autotest_common.sh@936 -- # '[' -z 69894 ']' 00:06:13.922 14:52:37 -- common/autotest_common.sh@940 -- # kill -0 69894 00:06:13.922 14:52:37 -- common/autotest_common.sh@941 -- # uname 00:06:13.922 14:52:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.922 14:52:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69894 00:06:13.922 killing process with pid 69894 00:06:13.922 14:52:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.922 14:52:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.922 14:52:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69894' 00:06:13.922 14:52:37 -- common/autotest_common.sh@955 -- # kill 69894 00:06:13.922 14:52:37 -- common/autotest_common.sh@960 -- # wait 69894 00:06:14.489 00:06:14.489 real 0m2.989s 00:06:14.489 user 0m3.232s 00:06:14.489 sys 0m0.731s 00:06:14.489 14:52:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.489 ************************************ 00:06:14.489 END TEST non_locking_app_on_locked_coremask 00:06:14.489 ************************************ 00:06:14.489 14:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:14.489 14:52:37 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.489 14:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.489 14:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.489 14:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:14.489 ************************************ 00:06:14.489 START TEST locking_app_on_unlocked_coremask 00:06:14.489 ************************************ 00:06:14.490 14:52:37 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:14.490 14:52:37 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69952 00:06:14.490 14:52:37 -- event/cpu_locks.sh@99 -- # waitforlisten 69952 /var/tmp/spdk.sock 00:06:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.490 14:52:37 -- common/autotest_common.sh@829 -- # '[' -z 69952 ']' 00:06:14.490 14:52:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.490 14:52:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.490 14:52:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.490 14:52:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.490 14:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:14.490 14:52:37 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.490 [2024-11-18 14:52:37.889073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.490 [2024-11-18 14:52:37.889194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69952 ] 00:06:14.490 [2024-11-18 14:52:38.033682] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.490 [2024-11-18 14:52:38.033740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.490 [2024-11-18 14:52:38.075230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.748 [2024-11-18 14:52:38.075708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.316 14:52:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.316 14:52:38 -- common/autotest_common.sh@862 -- # return 0 00:06:15.316 14:52:38 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69968 00:06:15.316 14:52:38 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.316 14:52:38 -- event/cpu_locks.sh@103 -- # waitforlisten 69968 /var/tmp/spdk2.sock 00:06:15.316 14:52:38 -- common/autotest_common.sh@829 -- # '[' -z 69968 ']' 00:06:15.316 14:52:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.316 14:52:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.316 14:52:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.316 14:52:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.316 14:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.316 [2024-11-18 14:52:38.769819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
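locking_app_on_unlocked_coremask flips the roles: the first instance starts with --disable-cpumask-locks, so core 0 stays unclaimed and the second, locking instance acquires the lock normally (locks_exist later passes for pid 69968). Reduced launch order:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 --disable-cpumask-locks &   # leaves core 0 unclaimed
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock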
00:06:15.316 [2024-11-18 14:52:38.770086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69968 ] 00:06:15.574 [2024-11-18 14:52:38.922504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.574 [2024-11-18 14:52:39.008683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.574 [2024-11-18 14:52:39.008902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.142 14:52:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.142 14:52:39 -- common/autotest_common.sh@862 -- # return 0 00:06:16.142 14:52:39 -- event/cpu_locks.sh@105 -- # locks_exist 69968 00:06:16.142 14:52:39 -- event/cpu_locks.sh@22 -- # lslocks -p 69968 00:06:16.142 14:52:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.401 14:52:39 -- event/cpu_locks.sh@107 -- # killprocess 69952 00:06:16.401 14:52:39 -- common/autotest_common.sh@936 -- # '[' -z 69952 ']' 00:06:16.401 14:52:39 -- common/autotest_common.sh@940 -- # kill -0 69952 00:06:16.401 14:52:39 -- common/autotest_common.sh@941 -- # uname 00:06:16.401 14:52:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.401 14:52:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69952 00:06:16.401 killing process with pid 69952 00:06:16.401 14:52:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.401 14:52:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.401 14:52:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69952' 00:06:16.401 14:52:39 -- common/autotest_common.sh@955 -- # kill 69952 00:06:16.401 14:52:39 -- common/autotest_common.sh@960 -- # wait 69952 00:06:16.967 14:52:40 -- event/cpu_locks.sh@108 -- # killprocess 69968 00:06:16.967 14:52:40 -- common/autotest_common.sh@936 -- # '[' -z 69968 ']' 00:06:16.967 14:52:40 -- common/autotest_common.sh@940 -- # kill -0 69968 00:06:16.967 14:52:40 -- common/autotest_common.sh@941 -- # uname 00:06:16.967 14:52:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.967 14:52:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69968 00:06:16.967 killing process with pid 69968 00:06:16.967 14:52:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.967 14:52:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.967 14:52:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69968' 00:06:16.967 14:52:40 -- common/autotest_common.sh@955 -- # kill 69968 00:06:16.967 14:52:40 -- common/autotest_common.sh@960 -- # wait 69968 00:06:17.225 00:06:17.225 real 0m2.985s 00:06:17.225 user 0m3.165s 00:06:17.225 sys 0m0.844s 00:06:17.225 14:52:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.225 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:17.225 ************************************ 00:06:17.225 END TEST locking_app_on_unlocked_coremask 00:06:17.225 ************************************ 00:06:17.483 14:52:40 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.483 14:52:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.483 14:52:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.483 14:52:40 -- common/autotest_common.sh@10 -- # set +x 
00:06:17.483 ************************************ 00:06:17.483 START TEST locking_app_on_locked_coremask 00:06:17.483 ************************************ 00:06:17.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.483 14:52:40 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:17.483 14:52:40 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70026 00:06:17.483 14:52:40 -- event/cpu_locks.sh@116 -- # waitforlisten 70026 /var/tmp/spdk.sock 00:06:17.483 14:52:40 -- common/autotest_common.sh@829 -- # '[' -z 70026 ']' 00:06:17.483 14:52:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.483 14:52:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.483 14:52:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.483 14:52:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.483 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:17.483 14:52:40 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.483 [2024-11-18 14:52:40.913393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.483 [2024-11-18 14:52:40.913512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:06:17.483 [2024-11-18 14:52:41.061069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.741 [2024-11-18 14:52:41.100284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.741 [2024-11-18 14:52:41.100484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.307 14:52:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.307 14:52:41 -- common/autotest_common.sh@862 -- # return 0 00:06:18.307 14:52:41 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.307 14:52:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70042 00:06:18.307 14:52:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70042 /var/tmp/spdk2.sock 00:06:18.307 14:52:41 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.307 14:52:41 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70042 /var/tmp/spdk2.sock 00:06:18.307 14:52:41 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.307 14:52:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.307 14:52:41 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.307 14:52:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.307 14:52:41 -- common/autotest_common.sh@653 -- # waitforlisten 70042 /var/tmp/spdk2.sock 00:06:18.307 14:52:41 -- common/autotest_common.sh@829 -- # '[' -z 70042 ']' 00:06:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.307 14:52:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.307 14:52:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.307 14:52:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:18.307 14:52:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.307 14:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:18.307 [2024-11-18 14:52:41.784652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.307 [2024-11-18 14:52:41.784772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70042 ] 00:06:18.566 [2024-11-18 14:52:41.930735] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70026 has claimed it. 00:06:18.566 [2024-11-18 14:52:41.930802] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.133 ERROR: process (pid: 70042) is no longer running 00:06:19.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70042) - No such process 00:06:19.133 14:52:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.133 14:52:42 -- common/autotest_common.sh@862 -- # return 1 00:06:19.133 14:52:42 -- common/autotest_common.sh@653 -- # es=1 00:06:19.134 14:52:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.134 14:52:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.134 14:52:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.134 14:52:42 -- event/cpu_locks.sh@122 -- # locks_exist 70026 00:06:19.134 14:52:42 -- event/cpu_locks.sh@22 -- # lslocks -p 70026 00:06:19.134 14:52:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.134 14:52:42 -- event/cpu_locks.sh@124 -- # killprocess 70026 00:06:19.134 14:52:42 -- common/autotest_common.sh@936 -- # '[' -z 70026 ']' 00:06:19.134 14:52:42 -- common/autotest_common.sh@940 -- # kill -0 70026 00:06:19.134 14:52:42 -- common/autotest_common.sh@941 -- # uname 00:06:19.134 14:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.134 14:52:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70026 00:06:19.392 killing process with pid 70026 00:06:19.392 14:52:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.392 14:52:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.392 14:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70026' 00:06:19.392 14:52:42 -- common/autotest_common.sh@955 -- # kill 70026 00:06:19.392 14:52:42 -- common/autotest_common.sh@960 -- # wait 70026 00:06:19.651 00:06:19.651 real 0m2.172s 00:06:19.651 user 0m2.362s 00:06:19.651 sys 0m0.563s 00:06:19.651 14:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.651 ************************************ 00:06:19.651 END TEST locking_app_on_locked_coremask 00:06:19.651 ************************************ 00:06:19.651 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.651 14:52:43 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.651 14:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.651 14:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.651 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.651 ************************************ 00:06:19.651 START TEST locking_overlapped_coremask 00:06:19.651 ************************************ 00:06:19.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
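locking_app_on_locked_coremask ends with the abort the test wants: claim_cpu_cores refuses core 0 because pid 70026 already holds it, and the harness's NOT wrapper converts that non-zero exit into a pass. A simplified sketch of the inversion (the in-tree helper additionally inspects exit codes above 128, as the es checks in the trace show):

    NOT() { ! "$@"; }   # simplified: succeed exactly when the wrapped command fails
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # with a first target already holding core 0:
    NOT "$tgt" -m 0x1 -r /var/tmp/spdk2.sock && echo "second claim failed, as expected"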
00:06:19.651 14:52:43 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:19.651 14:52:43 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70090 00:06:19.651 14:52:43 -- event/cpu_locks.sh@133 -- # waitforlisten 70090 /var/tmp/spdk.sock 00:06:19.651 14:52:43 -- common/autotest_common.sh@829 -- # '[' -z 70090 ']' 00:06:19.651 14:52:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.651 14:52:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.651 14:52:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.651 14:52:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.651 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.651 14:52:43 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.651 [2024-11-18 14:52:43.122238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.651 [2024-11-18 14:52:43.122400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70090 ] 00:06:19.909 [2024-11-18 14:52:43.271439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.909 [2024-11-18 14:52:43.315641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.909 [2024-11-18 14:52:43.316116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.909 [2024-11-18 14:52:43.317083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.909 [2024-11-18 14:52:43.317159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.475 14:52:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.475 14:52:43 -- common/autotest_common.sh@862 -- # return 0 00:06:20.475 14:52:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70102 00:06:20.475 14:52:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70102 /var/tmp/spdk2.sock 00:06:20.475 14:52:43 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.475 14:52:43 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70102 /var/tmp/spdk2.sock 00:06:20.475 14:52:43 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.475 14:52:43 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:20.475 14:52:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.475 14:52:43 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:20.475 14:52:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.475 14:52:43 -- common/autotest_common.sh@653 -- # waitforlisten 70102 /var/tmp/spdk2.sock 00:06:20.475 14:52:43 -- common/autotest_common.sh@829 -- # '[' -z 70102 ']' 00:06:20.475 14:52:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.475 14:52:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.475 14:52:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
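locking_overlapped_coremask chooses masks that intersect on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target can only fail on core 2. The contested core in one line of shell arithmetic:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. only core 2 is shared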
00:06:20.476 14:52:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.476 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:20.476 [2024-11-18 14:52:44.030141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.476 [2024-11-18 14:52:44.030825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70102 ] 00:06:20.734 [2024-11-18 14:52:44.184012] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70090 has claimed it. 00:06:20.734 [2024-11-18 14:52:44.184088] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.327 ERROR: process (pid: 70102) is no longer running 00:06:21.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70102) - No such process 00:06:21.327 14:52:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.327 14:52:44 -- common/autotest_common.sh@862 -- # return 1 00:06:21.327 14:52:44 -- common/autotest_common.sh@653 -- # es=1 00:06:21.327 14:52:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.327 14:52:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.327 14:52:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.327 14:52:44 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.327 14:52:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.327 14:52:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.327 14:52:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.327 14:52:44 -- event/cpu_locks.sh@141 -- # killprocess 70090 00:06:21.327 14:52:44 -- common/autotest_common.sh@936 -- # '[' -z 70090 ']' 00:06:21.327 14:52:44 -- common/autotest_common.sh@940 -- # kill -0 70090 00:06:21.327 14:52:44 -- common/autotest_common.sh@941 -- # uname 00:06:21.327 14:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.327 14:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70090 00:06:21.327 14:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.327 14:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.327 14:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70090' 00:06:21.327 killing process with pid 70090 00:06:21.327 14:52:44 -- common/autotest_common.sh@955 -- # kill 70090 00:06:21.327 14:52:44 -- common/autotest_common.sh@960 -- # wait 70090 00:06:21.585 00:06:21.585 real 0m1.945s 00:06:21.585 user 0m5.236s 00:06:21.585 sys 0m0.453s 00:06:21.585 14:52:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.585 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.585 ************************************ 00:06:21.585 END TEST locking_overlapped_coremask 00:06:21.585 ************************************ 00:06:21.585 14:52:45 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.585 14:52:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.585 14:52:45 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.585 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.585 ************************************ 00:06:21.585 START TEST locking_overlapped_coremask_via_rpc 00:06:21.585 ************************************ 00:06:21.585 14:52:45 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:21.585 14:52:45 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70145 00:06:21.585 14:52:45 -- event/cpu_locks.sh@149 -- # waitforlisten 70145 /var/tmp/spdk.sock 00:06:21.585 14:52:45 -- common/autotest_common.sh@829 -- # '[' -z 70145 ']' 00:06:21.585 14:52:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.585 14:52:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.585 14:52:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.585 14:52:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.585 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.585 14:52:45 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.585 [2024-11-18 14:52:45.111409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.585 [2024-11-18 14:52:45.111528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:06:21.843 [2024-11-18 14:52:45.257486] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.843 [2024-11-18 14:52:45.257544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.843 [2024-11-18 14:52:45.299014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.843 [2024-11-18 14:52:45.299704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.843 [2024-11-18 14:52:45.299807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.843 [2024-11-18 14:52:45.299867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.409 14:52:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.409 14:52:45 -- common/autotest_common.sh@862 -- # return 0 00:06:22.409 14:52:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70162 00:06:22.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.409 14:52:45 -- event/cpu_locks.sh@153 -- # waitforlisten 70162 /var/tmp/spdk2.sock 00:06:22.409 14:52:45 -- common/autotest_common.sh@829 -- # '[' -z 70162 ']' 00:06:22.409 14:52:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.409 14:52:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.409 14:52:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.409 14:52:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
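Both overlapped tests settle on the same invariant through check_remaining_locks: only the surviving target's lock files may exist, and mask 0x7 leaves exactly spdk_cpu_lock_000 through _002 in /var/tmp. The glob comparison, as in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one file per core in mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "no stray lock files"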
00:06:22.409 14:52:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.409 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.667 [2024-11-18 14:52:46.008961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.667 [2024-11-18 14:52:46.009818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70162 ] 00:06:22.667 [2024-11-18 14:52:46.166425] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.667 [2024-11-18 14:52:46.166496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.924 [2024-11-18 14:52:46.255477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.925 [2024-11-18 14:52:46.255819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.925 [2024-11-18 14:52:46.259405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.925 [2024-11-18 14:52:46.259480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.490 14:52:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.490 14:52:46 -- common/autotest_common.sh@862 -- # return 0 00:06:23.490 14:52:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.490 14:52:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.490 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.490 14:52:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.490 14:52:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.490 14:52:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:23.490 14:52:46 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.490 14:52:46 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:23.490 14:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.490 14:52:46 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:23.490 14:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.490 14:52:46 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.490 14:52:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.490 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.490 [2024-11-18 14:52:46.841537] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70145 has claimed it. 00:06:23.490 request: 00:06:23.490 { 00:06:23.490 "method": "framework_enable_cpumask_locks", 00:06:23.490 "req_id": 1 00:06:23.490 } 00:06:23.490 Got JSON-RPC error response 00:06:23.490 response: 00:06:23.490 { 00:06:23.490 "code": -32603, 00:06:23.490 "message": "Failed to claim CPU core: 2" 00:06:23.490 } 00:06:23.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
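The via_rpc variant reproduces the conflict at runtime: after the first app's framework_enable_cpumask_locks claims cores 0-2, the same call on the second app's socket returns the JSON-RPC failure quoted above. A hedged rpc.py equivalent (script path assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks  # first app: claims cores 0-2
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: -32603 "Failed to claim CPU core: 2"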
00:06:23.490 14:52:46 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:23.490 14:52:46 -- common/autotest_common.sh@653 -- # es=1 00:06:23.490 14:52:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.490 14:52:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.490 14:52:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.490 14:52:46 -- event/cpu_locks.sh@158 -- # waitforlisten 70145 /var/tmp/spdk.sock 00:06:23.490 14:52:46 -- common/autotest_common.sh@829 -- # '[' -z 70145 ']' 00:06:23.490 14:52:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.490 14:52:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.490 14:52:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.490 14:52:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.490 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.490 14:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.490 14:52:47 -- common/autotest_common.sh@862 -- # return 0 00:06:23.490 14:52:47 -- event/cpu_locks.sh@159 -- # waitforlisten 70162 /var/tmp/spdk2.sock 00:06:23.490 14:52:47 -- common/autotest_common.sh@829 -- # '[' -z 70162 ']' 00:06:23.490 14:52:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.490 14:52:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.490 14:52:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.490 14:52:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.490 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.748 ************************************ 00:06:23.748 END TEST locking_overlapped_coremask_via_rpc 00:06:23.748 ************************************ 00:06:23.748 14:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.748 14:52:47 -- common/autotest_common.sh@862 -- # return 0 00:06:23.748 14:52:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.748 14:52:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.748 14:52:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.748 14:52:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.748 00:06:23.748 real 0m2.214s 00:06:23.748 user 0m1.005s 00:06:23.748 sys 0m0.133s 00:06:23.748 14:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.748 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.748 14:52:47 -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.748 14:52:47 -- event/cpu_locks.sh@15 -- # [[ -z 70145 ]] 00:06:23.748 14:52:47 -- event/cpu_locks.sh@15 -- # killprocess 70145 00:06:23.748 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70145 ']' 00:06:23.748 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70145 00:06:23.748 14:52:47 -- common/autotest_common.sh@941 -- # uname 00:06:23.748 14:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.748 14:52:47 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70145 00:06:23.748 killing process with pid 70145 00:06:23.748 14:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.748 14:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.748 14:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70145' 00:06:23.748 14:52:47 -- common/autotest_common.sh@955 -- # kill 70145 00:06:23.748 14:52:47 -- common/autotest_common.sh@960 -- # wait 70145 00:06:24.315 14:52:47 -- event/cpu_locks.sh@16 -- # [[ -z 70162 ]] 00:06:24.315 14:52:47 -- event/cpu_locks.sh@16 -- # killprocess 70162 00:06:24.315 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70162 ']' 00:06:24.315 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70162 00:06:24.315 14:52:47 -- common/autotest_common.sh@941 -- # uname 00:06:24.315 14:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.315 14:52:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70162 00:06:24.315 killing process with pid 70162 00:06:24.315 14:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:24.315 14:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:24.315 14:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70162' 00:06:24.315 14:52:47 -- common/autotest_common.sh@955 -- # kill 70162 00:06:24.315 14:52:47 -- common/autotest_common.sh@960 -- # wait 70162 00:06:24.574 14:52:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.574 14:52:47 -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.574 14:52:47 -- event/cpu_locks.sh@15 -- # [[ -z 70145 ]] 00:06:24.574 14:52:47 -- event/cpu_locks.sh@15 -- # killprocess 70145 00:06:24.574 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70145 ']' 00:06:24.574 Process with pid 70145 is not found 00:06:24.574 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70145 00:06:24.574 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70145) - No such process 00:06:24.574 14:52:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70145 is not found' 00:06:24.574 14:52:47 -- event/cpu_locks.sh@16 -- # [[ -z 70162 ]] 00:06:24.574 14:52:47 -- event/cpu_locks.sh@16 -- # killprocess 70162 00:06:24.574 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70162 ']' 00:06:24.574 Process with pid 70162 is not found 00:06:24.574 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70162 00:06:24.574 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70162) - No such process 00:06:24.574 14:52:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70162 is not found' 00:06:24.574 14:52:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.574 00:06:24.574 real 0m16.073s 00:06:24.574 user 0m28.033s 00:06:24.574 sys 0m4.280s 00:06:24.574 14:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.574 ************************************ 00:06:24.574 END TEST cpu_locks 00:06:24.574 ************************************ 00:06:24.574 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.574 ************************************ 00:06:24.574 END TEST event 00:06:24.574 ************************************ 00:06:24.574 00:06:24.574 real 0m39.222s 00:06:24.574 user 1m15.789s 00:06:24.574 sys 0m7.080s 00:06:24.574 14:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.574 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.574 14:52:48 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.574 14:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.574 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.574 ************************************ 00:06:24.574 START TEST thread 00:06:24.574 ************************************ 00:06:24.574 14:52:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.574 * Looking for test storage... 00:06:24.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.574 14:52:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:24.574 14:52:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:24.574 14:52:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:24.574 14:52:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:24.574 14:52:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:24.574 14:52:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:24.574 14:52:48 -- scripts/common.sh@335 -- # IFS=.-: 00:06:24.574 14:52:48 -- scripts/common.sh@335 -- # read -ra ver1 00:06:24.574 14:52:48 -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.574 14:52:48 -- scripts/common.sh@336 -- # read -ra ver2 00:06:24.574 14:52:48 -- scripts/common.sh@337 -- # local 'op=<' 00:06:24.574 14:52:48 -- scripts/common.sh@339 -- # ver1_l=2 00:06:24.574 14:52:48 -- scripts/common.sh@340 -- # ver2_l=1 00:06:24.574 14:52:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:24.574 14:52:48 -- scripts/common.sh@343 -- # case "$op" in 00:06:24.574 14:52:48 -- scripts/common.sh@344 -- # : 1 00:06:24.574 14:52:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:24.574 14:52:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.574 14:52:48 -- scripts/common.sh@364 -- # decimal 1 00:06:24.574 14:52:48 -- scripts/common.sh@352 -- # local d=1 00:06:24.574 14:52:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.574 14:52:48 -- scripts/common.sh@354 -- # echo 1 00:06:24.574 14:52:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:24.574 14:52:48 -- scripts/common.sh@365 -- # decimal 2 00:06:24.574 14:52:48 -- scripts/common.sh@352 -- # local d=2 00:06:24.574 14:52:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.574 14:52:48 -- scripts/common.sh@354 -- # echo 2 00:06:24.574 14:52:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:24.574 14:52:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:24.574 14:52:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:24.574 14:52:48 -- scripts/common.sh@367 -- # return 0 00:06:24.574 14:52:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:24.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.574 --rc genhtml_branch_coverage=1 00:06:24.574 --rc genhtml_function_coverage=1 00:06:24.574 --rc genhtml_legend=1 00:06:24.574 --rc geninfo_all_blocks=1 00:06:24.574 --rc geninfo_unexecuted_blocks=1 00:06:24.574 00:06:24.574 ' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:24.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.574 --rc genhtml_branch_coverage=1 00:06:24.574 --rc genhtml_function_coverage=1 00:06:24.574 --rc genhtml_legend=1 00:06:24.574 --rc geninfo_all_blocks=1 00:06:24.574 --rc geninfo_unexecuted_blocks=1 00:06:24.574 00:06:24.574 ' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:24.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.574 --rc genhtml_branch_coverage=1 00:06:24.574 --rc genhtml_function_coverage=1 00:06:24.574 --rc genhtml_legend=1 00:06:24.574 --rc geninfo_all_blocks=1 00:06:24.574 --rc geninfo_unexecuted_blocks=1 00:06:24.574 00:06:24.574 ' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:24.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.574 --rc genhtml_branch_coverage=1 00:06:24.574 --rc genhtml_function_coverage=1 00:06:24.574 --rc genhtml_legend=1 00:06:24.574 --rc geninfo_all_blocks=1 00:06:24.574 --rc geninfo_unexecuted_blocks=1 00:06:24.574 00:06:24.574 ' 00:06:24.574 14:52:48 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.574 14:52:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:24.574 14:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.574 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.833 ************************************ 00:06:24.833 START TEST thread_poller_perf 00:06:24.833 ************************************ 00:06:24.833 14:52:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.833 [2024-11-18 14:52:48.190881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
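poller_perf's flags map one-to-one onto the banner it prints: -b is the poller count, -l the timer period in microseconds (0 registers the pollers with no period), and -t the run time in seconds; the two invocations in this run differ only in -l, which is how the mapping can be read off the banners. The first one, standalone:

    perf=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
    "$perf" -b 1000 -l 1 -t 1   # "Running 1000 pollers for 1 seconds with 1 microseconds period."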
00:06:24.833 [2024-11-18 14:52:48.191144] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:06:24.833 [2024-11-18 14:52:48.336875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.833 [2024-11-18 14:52:48.376552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.833 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:26.211 [2024-11-18T14:52:49.801Z] ====================================== 00:06:26.211 [2024-11-18T14:52:49.801Z] busy:2612306604 (cyc) 00:06:26.211 [2024-11-18T14:52:49.801Z] total_run_count: 395000 00:06:26.211 [2024-11-18T14:52:49.801Z] tsc_hz: 2600000000 (cyc) 00:06:26.211 [2024-11-18T14:52:49.801Z] ====================================== 00:06:26.211 [2024-11-18T14:52:49.801Z] poller_cost: 6613 (cyc), 2543 (nsec) 00:06:26.211 00:06:26.211 real 0m1.287s 00:06:26.211 user 0m1.107s 00:06:26.211 sys 0m0.072s 00:06:26.211 14:52:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.211 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.211 ************************************ 00:06:26.211 END TEST thread_poller_perf 00:06:26.211 ************************************ 00:06:26.211 14:52:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.211 14:52:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:26.211 14:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.211 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.211 ************************************ 00:06:26.211 START TEST thread_poller_perf 00:06:26.211 ************************************ 00:06:26.211 14:52:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.211 [2024-11-18 14:52:49.519956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.211 [2024-11-18 14:52:49.520221] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70330 ] 00:06:26.211 [2024-11-18 14:52:49.661704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.211 [2024-11-18 14:52:49.713468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.211 Running 1000 pollers for 1 seconds with 0 microseconds period. 
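The summary block for this first run is plain TSC arithmetic: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure divides that by the cycles-per-nanosecond implied by tsc_hz. A quick recomputation of the numbers above, outside the harness:

    # Recompute poller_cost from the figures in the log above.
    busy=2612306604 runs=395000 tsc_hz=2600000000
    cyc=$((busy / runs))    # 6613 cyc per poller invocation
    awk -v c="$cyc" -v hz="$tsc_hz" 'BEGIN { printf "%d nsec\n", c / (hz / 1e9) }'    # 2543 nsec

The zero-period run announced just above drops this cost to 488 cycles in the results that follow, since pollers fire back to back instead of waiting out a 1-microsecond timer.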
00:06:27.586 [2024-11-18T14:52:51.176Z] ====================================== 00:06:27.586 [2024-11-18T14:52:51.176Z] busy:2603751528 (cyc) 00:06:27.586 [2024-11-18T14:52:51.176Z] total_run_count: 5334000 00:06:27.586 [2024-11-18T14:52:51.176Z] tsc_hz: 2600000000 (cyc) 00:06:27.586 [2024-11-18T14:52:51.176Z] ====================================== 00:06:27.586 [2024-11-18T14:52:51.176Z] poller_cost: 488 (cyc), 187 (nsec) 00:06:27.586 00:06:27.586 real 0m1.291s 00:06:27.586 user 0m1.114s 00:06:27.586 sys 0m0.070s 00:06:27.586 ************************************ 00:06:27.586 END TEST thread_poller_perf 00:06:27.586 ************************************ 00:06:27.586 14:52:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.586 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.586 14:52:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.586 00:06:27.586 real 0m2.799s 00:06:27.586 user 0m2.318s 00:06:27.586 sys 0m0.266s 00:06:27.586 ************************************ 00:06:27.586 END TEST thread 00:06:27.586 ************************************ 00:06:27.586 14:52:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.586 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.586 14:52:50 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:27.586 14:52:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.586 14:52:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.586 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.586 ************************************ 00:06:27.586 START TEST accel 00:06:27.586 ************************************ 00:06:27.586 14:52:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:27.586 * Looking for test storage... 00:06:27.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:27.586 14:52:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.586 14:52:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.586 14:52:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:27.586 14:52:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:27.586 14:52:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:27.586 14:52:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:27.586 14:52:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:27.586 14:52:50 -- scripts/common.sh@335 -- # IFS=.-: 00:06:27.586 14:52:50 -- scripts/common.sh@335 -- # read -ra ver1 00:06:27.586 14:52:50 -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.586 14:52:50 -- scripts/common.sh@336 -- # read -ra ver2 00:06:27.586 14:52:50 -- scripts/common.sh@337 -- # local 'op=<' 00:06:27.586 14:52:50 -- scripts/common.sh@339 -- # ver1_l=2 00:06:27.586 14:52:50 -- scripts/common.sh@340 -- # ver2_l=1 00:06:27.586 14:52:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:27.586 14:52:50 -- scripts/common.sh@343 -- # case "$op" in 00:06:27.586 14:52:50 -- scripts/common.sh@344 -- # : 1 00:06:27.586 14:52:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:27.586 14:52:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.586 14:52:50 -- scripts/common.sh@364 -- # decimal 1 00:06:27.586 14:52:50 -- scripts/common.sh@352 -- # local d=1 00:06:27.586 14:52:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.586 14:52:50 -- scripts/common.sh@354 -- # echo 1 00:06:27.586 14:52:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:27.586 14:52:50 -- scripts/common.sh@365 -- # decimal 2 00:06:27.586 14:52:50 -- scripts/common.sh@352 -- # local d=2 00:06:27.586 14:52:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.586 14:52:50 -- scripts/common.sh@354 -- # echo 2 00:06:27.586 14:52:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:27.587 14:52:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:27.587 14:52:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:27.587 14:52:50 -- scripts/common.sh@367 -- # return 0 00:06:27.587 14:52:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.587 14:52:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.587 --rc genhtml_branch_coverage=1 00:06:27.587 --rc genhtml_function_coverage=1 00:06:27.587 --rc genhtml_legend=1 00:06:27.587 --rc geninfo_all_blocks=1 00:06:27.587 --rc geninfo_unexecuted_blocks=1 00:06:27.587 00:06:27.587 ' 00:06:27.587 14:52:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.587 --rc genhtml_branch_coverage=1 00:06:27.587 --rc genhtml_function_coverage=1 00:06:27.587 --rc genhtml_legend=1 00:06:27.587 --rc geninfo_all_blocks=1 00:06:27.587 --rc geninfo_unexecuted_blocks=1 00:06:27.587 00:06:27.587 ' 00:06:27.587 14:52:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.587 --rc genhtml_branch_coverage=1 00:06:27.587 --rc genhtml_function_coverage=1 00:06:27.587 --rc genhtml_legend=1 00:06:27.587 --rc geninfo_all_blocks=1 00:06:27.587 --rc geninfo_unexecuted_blocks=1 00:06:27.587 00:06:27.587 ' 00:06:27.587 14:52:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:27.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.587 --rc genhtml_branch_coverage=1 00:06:27.587 --rc genhtml_function_coverage=1 00:06:27.587 --rc genhtml_legend=1 00:06:27.587 --rc geninfo_all_blocks=1 00:06:27.587 --rc geninfo_unexecuted_blocks=1 00:06:27.587 00:06:27.587 ' 00:06:27.587 14:52:50 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:27.587 14:52:50 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:27.587 14:52:50 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.587 14:52:50 -- accel/accel.sh@59 -- # spdk_tgt_pid=70407 00:06:27.587 14:52:50 -- accel/accel.sh@60 -- # waitforlisten 70407 00:06:27.587 14:52:50 -- common/autotest_common.sh@829 -- # '[' -z 70407 ']' 00:06:27.587 14:52:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.587 14:52:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.587 14:52:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
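Between launching spdk_tgt with its JSON config on /dev/fd/63 and querying it, waitforlisten 70407 blocks until the target's RPC socket answers. A simplified model of that wait, assuming SPDK's scripts/rpc.py is on PATH and the default /var/tmp/spdk.sock socket (the real helper also enforces a retry budget and richer liveness checks):

    # Poll the RPC socket until the target is ready (simplified waitforlisten).
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1                 # target died during startup
            rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }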
00:06:27.587 14:52:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.587 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.587 14:52:50 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:27.587 14:52:50 -- accel/accel.sh@58 -- # build_accel_config 00:06:27.587 14:52:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.587 14:52:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.587 14:52:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.587 14:52:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.587 14:52:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.587 14:52:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.587 14:52:50 -- accel/accel.sh@42 -- # jq -r . 00:06:27.587 [2024-11-18 14:52:51.064493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.587 [2024-11-18 14:52:51.064616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70407 ] 00:06:27.845 [2024-11-18 14:52:51.209252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.845 [2024-11-18 14:52:51.249721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.845 [2024-11-18 14:52:51.249914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.412 14:52:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.412 14:52:51 -- common/autotest_common.sh@862 -- # return 0 00:06:28.412 14:52:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:28.412 14:52:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:28.412 14:52:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.412 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.412 14:52:51 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:28.412 14:52:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # IFS== 00:06:28.412 14:52:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:28.412 14:52:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:28.412 14:52:51 -- accel/accel.sh@67 -- # killprocess 70407 00:06:28.412 14:52:51 -- common/autotest_common.sh@936 -- # '[' -z 70407 ']' 00:06:28.412 14:52:51 -- common/autotest_common.sh@940 -- # kill -0 70407 00:06:28.412 14:52:51 -- common/autotest_common.sh@941 -- # uname 00:06:28.412 14:52:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.412 14:52:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70407 00:06:28.412 killing process with pid 70407 00:06:28.412 14:52:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.412 14:52:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.412 14:52:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70407' 00:06:28.412 14:52:51 -- common/autotest_common.sh@955 -- # kill 70407 00:06:28.412 14:52:51 -- common/autotest_common.sh@960 -- # wait 70407 00:06:28.671 14:52:52 -- accel/accel.sh@68 -- # trap - ERR 00:06:28.671 14:52:52 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:28.671 14:52:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:28.671 14:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.671 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 14:52:52 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:28.671 14:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:28.671 14:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.671 14:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.671 14:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.671 14:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.671 14:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.671 14:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.671 14:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.671 14:52:52 -- accel/accel.sh@42 -- # jq -r . 
00:06:28.948 14:52:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.948 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 14:52:52 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:28.948 14:52:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:28.948 14:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.948 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 ************************************ 00:06:28.948 START TEST accel_missing_filename 00:06:28.948 ************************************ 00:06:28.948 14:52:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:28.948 14:52:52 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.948 14:52:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:28.948 14:52:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:28.948 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.948 14:52:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:28.948 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.948 14:52:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:28.948 14:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:28.948 14:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.948 14:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.948 14:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.948 14:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.948 14:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.948 14:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.948 14:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.948 14:52:52 -- accel/accel.sh@42 -- # jq -r . 00:06:28.948 [2024-11-18 14:52:52.342045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.948 [2024-11-18 14:52:52.342151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70461 ] 00:06:28.948 [2024-11-18 14:52:52.485877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.948 [2024-11-18 14:52:52.526932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.206 [2024-11-18 14:52:52.568827] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.206 [2024-11-18 14:52:52.622059] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:29.206 A filename is required. 
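accel_missing_filename passes precisely because accel_perf fails: -w compress without -l has no input file, so the app aborts with "A filename is required." and the NOT wrapper inverts the exit status. A stripped-down model of that wrapper (the real common/autotest_common.sh version also normalizes the exit code, visible in the es= lines below):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0
    }
    NOT accel_perf -t 1 -w compress    # ok: compress with no -l input must fail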
00:06:29.206 14:52:52 -- common/autotest_common.sh@653 -- # es=234 00:06:29.206 14:52:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.206 14:52:52 -- common/autotest_common.sh@662 -- # es=106 00:06:29.206 14:52:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.206 14:52:52 -- common/autotest_common.sh@670 -- # es=1 00:06:29.206 14:52:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.206 00:06:29.206 real 0m0.385s 00:06:29.206 user 0m0.190s 00:06:29.206 sys 0m0.122s 00:06:29.206 14:52:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.206 ************************************ 00:06:29.206 END TEST accel_missing_filename 00:06:29.206 ************************************ 00:06:29.206 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.206 14:52:52 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.206 14:52:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:29.206 14:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.206 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.206 ************************************ 00:06:29.206 START TEST accel_compress_verify 00:06:29.206 ************************************ 00:06:29.206 14:52:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.206 14:52:52 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.206 14:52:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.206 14:52:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:29.206 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.206 14:52:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:29.206 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.206 14:52:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.206 14:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.206 14:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.206 14:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.206 14:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.206 14:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.206 14:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.206 14:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.206 14:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.206 14:52:52 -- accel/accel.sh@42 -- # jq -r . 00:06:29.206 [2024-11-18 14:52:52.777743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.206 [2024-11-18 14:52:52.778044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:06:29.464 [2024-11-18 14:52:52.927289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.464 [2024-11-18 14:52:52.968009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.464 [2024-11-18 14:52:53.011273] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.723 [2024-11-18 14:52:53.067755] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:29.723 00:06:29.723 Compression does not support the verify option, aborting. 00:06:29.723 14:52:53 -- common/autotest_common.sh@653 -- # es=161 00:06:29.723 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.723 14:52:53 -- common/autotest_common.sh@662 -- # es=33 00:06:29.723 14:52:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.723 14:52:53 -- common/autotest_common.sh@670 -- # es=1 00:06:29.723 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.723 00:06:29.723 real 0m0.407s 00:06:29.723 user 0m0.200s 00:06:29.723 sys 0m0.131s 00:06:29.723 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.723 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 ************************************ 00:06:29.723 END TEST accel_compress_verify 00:06:29.723 ************************************ 00:06:29.723 14:52:53 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:29.723 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.723 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.723 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 ************************************ 00:06:29.723 START TEST accel_wrong_workload 00:06:29.723 ************************************ 00:06:29.723 14:52:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:29.723 14:52:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.723 14:52:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:29.723 14:52:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.723 14:52:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:29.723 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:29.723 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.723 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.723 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.723 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.723 14:52:53 -- accel/accel.sh@42 -- # jq -r . 
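The es=234/es=106 and es=161/es=33 pairs in these two negative tests are exit-status normalization: statuses above 128 encode death by signal, so 128 is subtracted before a case statement collapses any remaining failure to a plain 1. A sketch of the observable transitions (not the literal autotest_common.sh source):

    es=234                              # exit status of the aborted accel_perf
    ((es > 128)) && es=$((es - 128))    # 234 -> 106: strip the signal encoding
    case "$es" in
        0) ;;                           # success would pass through untouched
        *) es=1 ;;                      # any failure collapses to 1
    esac
    echo "$es"                          # prints 1, which (( !es == 0 )) then accepts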
00:06:29.723 Unsupported workload type: foobar 00:06:29.723 [2024-11-18 14:52:53.220389] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:29.723 accel_perf options: 00:06:29.723 [-h help message] 00:06:29.723 [-q queue depth per core] 00:06:29.723 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:29.723 [-T number of threads per core 00:06:29.723 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:29.723 [-t time in seconds] 00:06:29.723 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:29.723 [ dif_verify, , dif_generate, dif_generate_copy 00:06:29.723 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:29.723 [-l for compress/decompress workloads, name of uncompressed input file 00:06:29.723 [-S for crc32c workload, use this seed value (default 0) 00:06:29.723 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:29.723 [-f for fill workload, use this BYTE value (default 255) 00:06:29.723 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:29.723 [-y verify result if this switch is on] 00:06:29.723 [-a tasks to allocate per core (default: same value as -q)] 00:06:29.723 Can be used to spread operations across a wider range of memory. 00:06:29.723 14:52:53 -- common/autotest_common.sh@653 -- # es=1 00:06:29.723 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.723 14:52:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.723 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.723 00:06:29.723 real 0m0.051s 00:06:29.723 user 0m0.053s 00:06:29.723 sys 0m0.024s 00:06:29.723 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.723 ************************************ 00:06:29.723 END TEST accel_wrong_workload 00:06:29.723 ************************************ 00:06:29.723 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 14:52:53 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:29.723 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:29.723 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.723 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 ************************************ 00:06:29.723 START TEST accel_negative_buffers 00:06:29.723 ************************************ 00:06:29.723 14:52:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:29.723 14:52:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.723 14:52:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:29.723 14:52:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:29.723 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.723 14:52:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:29.723 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:29.723 14:52:53 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:29.723 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.723 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.723 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.723 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.723 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:06:29.982 -x option must be non-negative. 00:06:29.982 [2024-11-18 14:52:53.311851] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:29.982 accel_perf options: 00:06:29.982 [-h help message] 00:06:29.982 [-q queue depth per core] 00:06:29.982 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:29.982 [-T number of threads per core 00:06:29.982 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:29.982 [-t time in seconds] 00:06:29.982 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:29.982 [ dif_verify, , dif_generate, dif_generate_copy 00:06:29.982 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:29.982 [-l for compress/decompress workloads, name of uncompressed input file 00:06:29.982 [-S for crc32c workload, use this seed value (default 0) 00:06:29.982 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:29.982 [-f for fill workload, use this BYTE value (default 255) 00:06:29.982 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:29.982 [-y verify result if this switch is on] 00:06:29.982 [-a tasks to allocate per core (default: same value as -q)] 00:06:29.982 Can be used to spread operations across a wider range of memory. 
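Every accel_perf invocation in this section is spawned as accel_perf -c /dev/fd/62 with build_accel_config on the other end; accel_json_cfg=() stays empty because each [[ 0 -gt 0 ]] guard is false, so jq -r . emits an effectively empty config. The /dev/fd path itself is just bash process substitution, roughly as below (the payload shape is a hypothetical stand-in, not the harness's exact JSON):

    # Hand accel_perf a JSON config through a /dev/fd path (sketch).
    config='{"subsystems": []}'    # hypothetical empty-config payload
    accel_perf -c <(jq -r . <<< "$config") -t 1 -w crc32c -S 32 -y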
00:06:29.982 ************************************ 00:06:29.982 END TEST accel_negative_buffers 00:06:29.982 ************************************ 00:06:29.982 14:52:53 -- common/autotest_common.sh@653 -- # es=1 00:06:29.982 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.982 14:52:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.982 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.982 00:06:29.982 real 0m0.042s 00:06:29.982 user 0m0.044s 00:06:29.982 sys 0m0.024s 00:06:29.982 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.982 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.982 14:52:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:29.982 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:29.982 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.982 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.982 ************************************ 00:06:29.982 START TEST accel_crc32c 00:06:29.982 ************************************ 00:06:29.982 14:52:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:29.982 14:52:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.982 14:52:53 -- accel/accel.sh@17 -- # local accel_module 00:06:29.982 14:52:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:29.982 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:29.982 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.982 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.982 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.982 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.982 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.982 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.982 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.982 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:06:29.982 [2024-11-18 14:52:53.398412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.982 [2024-11-18 14:52:53.398529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70548 ] 00:06:29.982 [2024-11-18 14:52:53.547257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.240 [2024-11-18 14:52:53.587651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.617 14:52:54 -- accel/accel.sh@18 -- # out=' 00:06:31.617 SPDK Configuration: 00:06:31.617 Core mask: 0x1 00:06:31.617 00:06:31.617 Accel Perf Configuration: 00:06:31.617 Workload Type: crc32c 00:06:31.617 CRC-32C seed: 32 00:06:31.617 Transfer size: 4096 bytes 00:06:31.617 Vector count 1 00:06:31.617 Module: software 00:06:31.617 Queue depth: 32 00:06:31.617 Allocate depth: 32 00:06:31.617 # threads/core: 1 00:06:31.617 Run time: 1 seconds 00:06:31.617 Verify: Yes 00:06:31.617 00:06:31.617 Running for 1 seconds... 
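In the results table that follows, bandwidth is derived straight from the transfer rate: operations per second times the 4096-byte transfer size, divided by 2^20 for MiB/s. Checking the figure this run reports:

    # Bandwidth check for the crc32c table below (values copied from the log).
    ops=458432 xfer=4096
    awk -v o="$ops" -v x="$xfer" 'BEGIN { printf "%d MiB/s\n", int(o * x / 2^20) }'    # 1790 MiB/s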
00:06:31.617 00:06:31.617 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.617 ------------------------------------------------------------------------------------ 00:06:31.617 0,0 458432/s 1790 MiB/s 0 0 00:06:31.617 ==================================================================================== 00:06:31.617 Total 458432/s 1790 MiB/s 0 0' 00:06:31.617 14:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:31.617 14:52:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:31.617 14:52:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.617 14:52:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.617 14:52:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.617 14:52:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.617 14:52:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.617 14:52:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.617 14:52:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.617 14:52:54 -- accel/accel.sh@42 -- # jq -r . 00:06:31.617 [2024-11-18 14:52:54.808202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.617 [2024-11-18 14:52:54.808348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70568 ] 00:06:31.617 [2024-11-18 14:52:54.956082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.617 [2024-11-18 14:52:54.996588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=0x1 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=software 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=1 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val=Yes 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:31.617 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:31.617 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:31.617 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:32.992 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:32.992 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:32.992 14:52:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.992 14:52:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:32.992 14:52:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.992 00:06:32.992 real 0m2.810s 00:06:32.992 user 0m2.361s 00:06:32.992 sys 0m0.248s 00:06:32.992 14:52:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.992 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:32.992 ************************************ 00:06:32.992 END TEST accel_crc32c 00:06:32.992 ************************************ 00:06:32.992 14:52:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:32.992 14:52:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:32.992 14:52:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.992 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:32.992 ************************************ 00:06:32.992 START TEST accel_crc32c_C2 00:06:32.992 ************************************ 00:06:32.992 14:52:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:32.992 14:52:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.992 14:52:56 -- accel/accel.sh@17 -- # local accel_module 00:06:32.992 14:52:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.992 14:52:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.992 14:52:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.992 14:52:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.992 14:52:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.992 14:52:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.992 14:52:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.992 14:52:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.992 14:52:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.992 14:52:56 -- accel/accel.sh@42 -- # jq -r . 00:06:32.992 [2024-11-18 14:52:56.250012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.992 [2024-11-18 14:52:56.250125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70598 ] 00:06:32.992 [2024-11-18 14:52:56.399366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.992 [2024-11-18 14:52:56.440463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.369 14:52:57 -- accel/accel.sh@18 -- # out=' 00:06:34.369 SPDK Configuration: 00:06:34.369 Core mask: 0x1 00:06:34.369 00:06:34.369 Accel Perf Configuration: 00:06:34.369 Workload Type: crc32c 00:06:34.369 CRC-32C seed: 0 00:06:34.369 Transfer size: 4096 bytes 00:06:34.369 Vector count 2 00:06:34.369 Module: software 00:06:34.369 Queue depth: 32 00:06:34.369 Allocate depth: 32 00:06:34.369 # threads/core: 1 00:06:34.369 Run time: 1 seconds 00:06:34.369 Verify: Yes 00:06:34.369 00:06:34.369 Running for 1 seconds... 
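The dense runs of IFS=: / read -r var val / val= lines around these results are not test output; they are xtrace of accel_test scraping the "SPDK Configuration:" banner, pulling out the workload and module that actually ran so it can assert both (the [[ -n software ]] and [[ software == \s\o\f\t\w\a\r\e ]] checks). A self-contained sketch of that parse, using a two-line stand-in for the banner:

    out=$'Workload Type: crc32c\nModule: software'    # stand-in for the banner above
    while IFS=: read -r var val; do
        val=${val## }    # trim the space left after the ':'
        case "$var" in
            "Workload Type") accel_opc=$val ;;
            "Module") accel_module=$val ;;
        esac
    done <<< "$out"
    [[ -n $accel_module && $accel_module == software ]]
    [[ -n $accel_opc && $accel_opc == crc32c ]]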
00:06:34.369 00:06:34.369 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.369 ------------------------------------------------------------------------------------ 00:06:34.369 0,0 389120/s 3040 MiB/s 0 0 00:06:34.369 ==================================================================================== 00:06:34.369 Total 389120/s 1520 MiB/s 0 0' 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:34.369 14:52:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:34.369 14:52:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.369 14:52:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.369 14:52:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.369 14:52:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.369 14:52:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.369 14:52:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.369 14:52:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.369 14:52:57 -- accel/accel.sh@42 -- # jq -r . 00:06:34.369 [2024-11-18 14:52:57.651896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.369 [2024-11-18 14:52:57.652017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:06:34.369 [2024-11-18 14:52:57.800761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.369 [2024-11-18 14:52:57.840997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=0x1 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=crc32c 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=0 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=software 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=32 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=32 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=1 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val=Yes 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:34.369 14:52:57 -- accel/accel.sh@21 -- # val= 00:06:34.369 14:52:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # IFS=: 00:06:34.369 14:52:57 -- accel/accel.sh@20 -- # read -r var val 00:06:35.746 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.746 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.746 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.746 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:35.746 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.746 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.746 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.746 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:35.746 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.747 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:35.747 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.747 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:35.747 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.747 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.747 14:52:59 -- 
accel/accel.sh@20 -- # read -r var val 00:06:35.747 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:35.747 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:35.747 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:35.747 14:52:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.747 14:52:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:35.747 14:52:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.747 00:06:35.747 real 0m2.792s 00:06:35.747 user 0m2.336s 00:06:35.747 sys 0m0.253s 00:06:35.747 14:52:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.747 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:35.747 ************************************ 00:06:35.747 END TEST accel_crc32c_C2 00:06:35.747 ************************************ 00:06:35.747 14:52:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:35.747 14:52:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.747 14:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.747 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:35.747 ************************************ 00:06:35.747 START TEST accel_copy 00:06:35.747 ************************************ 00:06:35.747 14:52:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:35.747 14:52:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.747 14:52:59 -- accel/accel.sh@17 -- # local accel_module 00:06:35.747 14:52:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:35.748 14:52:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:35.748 14:52:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.748 14:52:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.748 14:52:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.748 14:52:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.748 14:52:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.748 14:52:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.748 14:52:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.748 14:52:59 -- accel/accel.sh@42 -- # jq -r . 00:06:35.748 [2024-11-18 14:52:59.084844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.748 [2024-11-18 14:52:59.084964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70654 ] 00:06:35.748 [2024-11-18 14:52:59.232217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.748 [2024-11-18 14:52:59.273599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.123 14:53:00 -- accel/accel.sh@18 -- # out=' 00:06:37.123 SPDK Configuration: 00:06:37.123 Core mask: 0x1 00:06:37.123 00:06:37.123 Accel Perf Configuration: 00:06:37.123 Workload Type: copy 00:06:37.123 Transfer size: 4096 bytes 00:06:37.123 Vector count 1 00:06:37.123 Module: software 00:06:37.123 Queue depth: 32 00:06:37.123 Allocate depth: 32 00:06:37.123 # threads/core: 1 00:06:37.123 Run time: 1 seconds 00:06:37.123 Verify: Yes 00:06:37.123 00:06:37.123 Running for 1 seconds... 
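One arithmetic note on the crc32c -C 2 table above: with vector count 2, each operation touches two 4 KiB buffers, so 389120 ops/s works out to the per-core row's 3040 MiB/s, while the Total row's 1520 MiB/s counts only one buffer per operation. Both readings, recomputed:

    # Vector-count bandwidth check for the -C 2 run above (figures from the log).
    ops=389120 xfer=4096 vecs=2
    awk -v o="$ops" -v x="$xfer" -v v="$vecs" \
        'BEGIN { printf "%d vs %d MiB/s\n", int(o*x*v/2^20), int(o*x/2^20) }'    # 3040 vs 1520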
00:06:37.123 00:06:37.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.123 ------------------------------------------------------------------------------------ 00:06:37.123 0,0 311552/s 1217 MiB/s 0 0 00:06:37.123 ==================================================================================== 00:06:37.123 Total 311552/s 1217 MiB/s 0 0' 00:06:37.123 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.123 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.123 14:53:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.123 14:53:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.123 14:53:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.123 14:53:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.123 14:53:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.123 14:53:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.123 14:53:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.123 14:53:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.123 14:53:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.123 14:53:00 -- accel/accel.sh@42 -- # jq -r . 00:06:37.123 [2024-11-18 14:53:00.478232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.123 [2024-11-18 14:53:00.478511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70675 ] 00:06:37.123 [2024-11-18 14:53:00.625602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.123 [2024-11-18 14:53:00.666536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.123 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=0x1 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=copy 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- 
accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=software 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=32 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=32 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=1 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val=Yes 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:37.381 14:53:00 -- accel/accel.sh@21 -- # val= 00:06:37.381 14:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:37.381 14:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:38.315 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.315 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.315 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.315 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.315 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.315 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.315 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.315 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.315 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.315 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.315 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.316 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.316 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.316 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.316 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:38.316 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:38.316 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.316 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:38.316 14:53:01 -- 
accel/accel.sh@20 -- # read -r var val 00:06:38.316 14:53:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.316 14:53:01 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:38.316 14:53:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.316 00:06:38.316 real 0m2.787s 00:06:38.316 user 0m2.337s 00:06:38.316 sys 0m0.250s 00:06:38.316 14:53:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.316 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.316 ************************************ 00:06:38.316 END TEST accel_copy 00:06:38.316 ************************************ 00:06:38.316 14:53:01 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.316 14:53:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:38.316 14:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.316 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.316 ************************************ 00:06:38.316 START TEST accel_fill 00:06:38.316 ************************************ 00:06:38.316 14:53:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.316 14:53:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.316 14:53:01 -- accel/accel.sh@17 -- # local accel_module 00:06:38.316 14:53:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.316 14:53:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:38.316 14:53:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.316 14:53:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.316 14:53:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.316 14:53:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.316 14:53:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.316 14:53:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.316 14:53:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.316 14:53:01 -- accel/accel.sh@42 -- # jq -r . 00:06:38.574 [2024-11-18 14:53:01.910136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.574 [2024-11-18 14:53:01.910268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:06:38.574 [2024-11-18 14:53:02.057416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.574 [2024-11-18 14:53:02.097479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.949 14:53:03 -- accel/accel.sh@18 -- # out=' 00:06:39.949 SPDK Configuration: 00:06:39.949 Core mask: 0x1 00:06:39.949 00:06:39.949 Accel Perf Configuration: 00:06:39.949 Workload Type: fill 00:06:39.949 Fill pattern: 0x80 00:06:39.949 Transfer size: 4096 bytes 00:06:39.949 Vector count 1 00:06:39.949 Module: software 00:06:39.949 Queue depth: 64 00:06:39.949 Allocate depth: 64 00:06:39.949 # threads/core: 1 00:06:39.949 Run time: 1 seconds 00:06:39.949 Verify: Yes 00:06:39.949 00:06:39.949 Running for 1 seconds... 
00:06:39.949 00:06:39.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.949 ------------------------------------------------------------------------------------ 00:06:39.949 0,0 596224/s 2329 MiB/s 0 0 00:06:39.949 ==================================================================================== 00:06:39.949 Total 596224/s 2329 MiB/s 0 0' 00:06:39.949 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:39.949 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:39.949 14:53:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.949 14:53:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.949 14:53:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.949 14:53:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.949 14:53:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.949 14:53:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.949 14:53:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.949 14:53:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.949 14:53:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.949 14:53:03 -- accel/accel.sh@42 -- # jq -r . 00:06:39.949 [2024-11-18 14:53:03.303118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.949 [2024-11-18 14:53:03.303584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70725 ] 00:06:39.949 [2024-11-18 14:53:03.449474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.949 [2024-11-18 14:53:03.490313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=0x1 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=fill 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=0x80 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 
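The Bandwidth column in each results table is derived from the Transfers column and the configured transfer size: MiB/s = transfers/s x transfer size / 2^20. A standalone awk check (not part of the suite) reproduces the fill figure above, which uses 4096-byte transfers:

  awk 'BEGIN { printf "%d MiB/s\n", 596224 * 4096 / (1024 * 1024) }'
  # prints: 2329 MiB/s, matching the 0,0 and Total rows
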
00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=software 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=64 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=64 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=1 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val=Yes 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.208 14:53:03 -- accel/accel.sh@21 -- # val= 00:06:40.208 14:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.208 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:41.143 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.143 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.143 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.143 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.143 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.143 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.143 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.143 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.143 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.143 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.144 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.144 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.144 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.144 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # IFS=: 
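Each accel_test case in this log resolves to the accel_perf example binary, with the harness feeding its generated JSON config over /dev/fd/62, as the @12 traces show. A minimal sketch of reproducing the fill run by hand, assuming a default SPDK build at the path seen in the traces and substituting an empty JSON object for the harness config (an assumption; the software module needs no extra configuration):

  # hypothetical manual invocation; the flags mirror the traced fill run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '{}') -t 1 -w fill -f 128 -q 64 -a 64 -y
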
00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.144 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:41.144 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:41.144 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.144 14:53:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.144 14:53:04 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:41.144 14:53:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.144 00:06:41.144 real 0m2.789s 00:06:41.144 user 0m2.354s 00:06:41.144 sys 0m0.234s 00:06:41.144 ************************************ 00:06:41.144 END TEST accel_fill 00:06:41.144 ************************************ 00:06:41.144 14:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.144 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.144 14:53:04 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:41.144 14:53:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:41.144 14:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.144 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.144 ************************************ 00:06:41.144 START TEST accel_copy_crc32c 00:06:41.144 ************************************ 00:06:41.144 14:53:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:41.144 14:53:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.144 14:53:04 -- accel/accel.sh@17 -- # local accel_module 00:06:41.144 14:53:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:41.144 14:53:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:41.144 14:53:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.144 14:53:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.144 14:53:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.144 14:53:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.144 14:53:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.144 14:53:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.144 14:53:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.144 14:53:04 -- accel/accel.sh@42 -- # jq -r . 00:06:41.402 [2024-11-18 14:53:04.733027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.402 [2024-11-18 14:53:04.733151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70761 ] 00:06:41.402 [2024-11-18 14:53:04.877952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.402 [2024-11-18 14:53:04.918630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.776 14:53:06 -- accel/accel.sh@18 -- # out=' 00:06:42.776 SPDK Configuration: 00:06:42.776 Core mask: 0x1 00:06:42.776 00:06:42.776 Accel Perf Configuration: 00:06:42.776 Workload Type: copy_crc32c 00:06:42.776 CRC-32C seed: 0 00:06:42.776 Vector size: 4096 bytes 00:06:42.776 Transfer size: 4096 bytes 00:06:42.776 Vector count 1 00:06:42.776 Module: software 00:06:42.776 Queue depth: 32 00:06:42.776 Allocate depth: 32 00:06:42.776 # threads/core: 1 00:06:42.776 Run time: 1 seconds 00:06:42.776 Verify: Yes 00:06:42.776 00:06:42.776 Running for 1 seconds... 
00:06:42.776 00:06:42.776 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.776 ------------------------------------------------------------------------------------ 00:06:42.776 0,0 298336/s 1165 MiB/s 0 0 00:06:42.776 ==================================================================================== 00:06:42.776 Total 298336/s 1165 MiB/s 0 0' 00:06:42.776 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.776 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.776 14:53:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.776 14:53:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.777 14:53:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.777 14:53:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.777 14:53:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.777 14:53:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.777 14:53:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.777 14:53:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.777 14:53:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.777 14:53:06 -- accel/accel.sh@42 -- # jq -r . 00:06:42.777 [2024-11-18 14:53:06.126503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.777 [2024-11-18 14:53:06.126619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70781 ] 00:06:42.777 [2024-11-18 14:53:06.273488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.777 [2024-11-18 14:53:06.313233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val=0x1 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val=0 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 
14:53:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val=software 00:06:42.777 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.777 14:53:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:42.777 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:42.777 14:53:06 -- accel/accel.sh@21 -- # val=32 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val=32 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val=1 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val=Yes 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.035 14:53:06 -- accel/accel.sh@21 -- # val= 00:06:43.035 14:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.035 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 
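The repeating IFS=:, read -r var val, and case "$var" traces that fill this log come from accel.sh splitting accel_perf's captured output on ':' to learn which opcode and module actually ran; a simplified reconstruction of that loop (not the verbatim source) is:

  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val# } ;;    # e.g. copy_crc32c
          *Module*) accel_module=${val# } ;;          # e.g. software
      esac
  done <<< "$out"
  [[ -n $accel_module && -n $accel_opc ]]             # the @28 checks that close each test
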
00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.970 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.970 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.970 14:53:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.970 14:53:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:43.970 14:53:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.970 00:06:43.970 real 0m2.784s 00:06:43.970 user 0m2.332s 00:06:43.970 sys 0m0.250s 00:06:43.970 14:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.970 14:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:43.970 ************************************ 00:06:43.970 END TEST accel_copy_crc32c 00:06:43.970 ************************************ 00:06:43.970 14:53:07 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.970 14:53:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:43.970 14:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.970 14:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:43.970 ************************************ 00:06:43.970 START TEST accel_copy_crc32c_C2 00:06:43.970 ************************************ 00:06:43.970 14:53:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:43.970 14:53:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.970 14:53:07 -- accel/accel.sh@17 -- # local accel_module 00:06:43.970 14:53:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:43.970 14:53:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:43.970 14:53:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.970 14:53:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.970 14:53:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.970 14:53:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.970 14:53:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.970 14:53:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.970 14:53:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.970 14:53:07 -- accel/accel.sh@42 -- # jq -r . 00:06:43.970 [2024-11-18 14:53:07.552979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.970 [2024-11-18 14:53:07.553075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:06:44.227 [2024-11-18 14:53:07.693403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.227 [2024-11-18 14:53:07.732789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.601 14:53:08 -- accel/accel.sh@18 -- # out=' 00:06:45.601 SPDK Configuration: 00:06:45.601 Core mask: 0x1 00:06:45.601 00:06:45.601 Accel Perf Configuration: 00:06:45.601 Workload Type: copy_crc32c 00:06:45.601 CRC-32C seed: 0 00:06:45.601 Vector size: 4096 bytes 00:06:45.601 Transfer size: 8192 bytes 00:06:45.601 Vector count 2 00:06:45.601 Module: software 00:06:45.601 Queue depth: 32 00:06:45.601 Allocate depth: 32 00:06:45.601 # threads/core: 1 00:06:45.601 Run time: 1 seconds 00:06:45.601 Verify: Yes 00:06:45.601 00:06:45.601 Running for 1 seconds... 00:06:45.601 00:06:45.601 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.601 ------------------------------------------------------------------------------------ 00:06:45.601 0,0 224480/s 1753 MiB/s 0 0 00:06:45.601 ==================================================================================== 00:06:45.601 Total 224480/s 1753 MiB/s 0 0' 00:06:45.601 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:45.601 14:53:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:45.601 14:53:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.601 14:53:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.601 14:53:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.601 14:53:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.601 14:53:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.601 14:53:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.601 14:53:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.601 14:53:08 -- accel/accel.sh@42 -- # jq -r . 00:06:45.601 [2024-11-18 14:53:08.931589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:45.601 [2024-11-18 14:53:08.931706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70832 ] 00:06:45.601 [2024-11-18 14:53:09.077710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.601 [2024-11-18 14:53:09.118073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=0x1 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=0 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=software 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=32 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=32 
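For the copy_crc32c -C 2 case above, the vector count doubles the effective transfer: two 4096-byte source vectors per operation give the 8192-byte transfer size printed in the configuration, and the bandwidth follows from it (awk's %d truncates, as the tool does):

  awk 'BEGIN { printf "%d MiB/s\n", 224480 * 8192 / (1024 * 1024) }'
  # prints: 1753 MiB/s, matching both rows of the table
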
00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=1 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val=Yes 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:45.601 14:53:09 -- accel/accel.sh@21 -- # val= 00:06:45.601 14:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:45.601 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.974 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.974 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.974 14:53:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.974 14:53:10 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:46.974 14:53:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.974 00:06:46.974 real 0m2.765s 00:06:46.974 user 0m2.320s 00:06:46.974 sys 0m0.246s 00:06:46.974 14:53:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.974 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.974 ************************************ 00:06:46.974 END TEST accel_copy_crc32c_C2 00:06:46.974 ************************************ 00:06:46.974 14:53:10 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:46.974 14:53:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:46.974 14:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.974 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.974 ************************************ 00:06:46.974 START TEST accel_dualcast 00:06:46.974 ************************************ 00:06:46.974 14:53:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:46.974 14:53:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.974 14:53:10 -- accel/accel.sh@17 -- # local accel_module 00:06:46.974 14:53:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:46.974 14:53:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:46.974 14:53:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.974 14:53:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.974 14:53:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:53:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:53:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:53:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.974 14:53:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.974 14:53:10 -- accel/accel.sh@42 -- # jq -r . 00:06:46.974 [2024-11-18 14:53:10.359456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.975 [2024-11-18 14:53:10.359585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:06:46.975 [2024-11-18 14:53:10.505180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.975 [2024-11-18 14:53:10.546370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.349 14:53:11 -- accel/accel.sh@18 -- # out=' 00:06:48.349 SPDK Configuration: 00:06:48.349 Core mask: 0x1 00:06:48.349 00:06:48.349 Accel Perf Configuration: 00:06:48.349 Workload Type: dualcast 00:06:48.349 Transfer size: 4096 bytes 00:06:48.349 Vector count 1 00:06:48.349 Module: software 00:06:48.349 Queue depth: 32 00:06:48.349 Allocate depth: 32 00:06:48.349 # threads/core: 1 00:06:48.349 Run time: 1 seconds 00:06:48.349 Verify: Yes 00:06:48.349 00:06:48.349 Running for 1 seconds... 00:06:48.349 00:06:48.349 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.349 ------------------------------------------------------------------------------------ 00:06:48.349 0,0 454592/s 1775 MiB/s 0 0 00:06:48.349 ==================================================================================== 00:06:48.349 Total 454592/s 1775 MiB/s 0 0' 00:06:48.349 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.349 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.349 14:53:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.350 14:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.350 14:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.350 14:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.350 14:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.350 14:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.350 14:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.350 14:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.350 14:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.350 14:53:11 -- accel/accel.sh@42 -- # jq -r . 
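The starred START TEST / END TEST banners wrapping each case, like the accel_dualcast banner just above, come from the run_test helper in autotest_common.sh; a condensed sketch of the mechanism (not the verbatim helper) is:

  run_test() {
      local test_name=$1; shift
      echo "************ START TEST $test_name ************"
      "$@"                        # e.g. accel_test -t 1 -w dualcast -y
      echo "************ END TEST $test_name ************"
  }
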
00:06:48.350 [2024-11-18 14:53:11.750708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.350 [2024-11-18 14:53:11.750823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:06:48.350 [2024-11-18 14:53:11.897268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.639 [2024-11-18 14:53:11.947503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=0x1 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=dualcast 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=software 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=32 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=32 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=1 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 
14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val=Yes 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.639 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.639 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.639 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:48.640 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:48.640 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.640 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:48.640 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@21 -- # val= 00:06:49.599 14:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # IFS=: 00:06:49.599 14:53:13 -- accel/accel.sh@20 -- # read -r var val 00:06:49.599 14:53:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.599 14:53:13 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:49.599 14:53:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.599 00:06:49.599 real 0m2.792s 00:06:49.599 user 0m2.350s 00:06:49.599 sys 0m0.244s 00:06:49.599 14:53:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.599 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:49.599 ************************************ 00:06:49.599 END TEST accel_dualcast 00:06:49.599 ************************************ 00:06:49.599 14:53:13 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:49.599 14:53:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.599 14:53:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.599 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:49.599 ************************************ 00:06:49.599 START TEST accel_compare 00:06:49.599 ************************************ 00:06:49.599 14:53:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:49.599 
14:53:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.599 14:53:13 -- accel/accel.sh@17 -- # local accel_module 00:06:49.599 14:53:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:49.599 14:53:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.599 14:53:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.599 14:53:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.599 14:53:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.599 14:53:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.599 14:53:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.599 14:53:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.599 14:53:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.599 14:53:13 -- accel/accel.sh@42 -- # jq -r . 00:06:49.858 [2024-11-18 14:53:13.188913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.858 [2024-11-18 14:53:13.189032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ] 00:06:49.858 [2024-11-18 14:53:13.336123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.858 [2024-11-18 14:53:13.377166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.234 14:53:14 -- accel/accel.sh@18 -- # out=' 00:06:51.234 SPDK Configuration: 00:06:51.234 Core mask: 0x1 00:06:51.234 00:06:51.234 Accel Perf Configuration: 00:06:51.234 Workload Type: compare 00:06:51.234 Transfer size: 4096 bytes 00:06:51.234 Vector count 1 00:06:51.234 Module: software 00:06:51.234 Queue depth: 32 00:06:51.234 Allocate depth: 32 00:06:51.234 # threads/core: 1 00:06:51.234 Run time: 1 seconds 00:06:51.234 Verify: Yes 00:06:51.234 00:06:51.234 Running for 1 seconds... 00:06:51.234 00:06:51.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.234 ------------------------------------------------------------------------------------ 00:06:51.234 0,0 430304/s 1680 MiB/s 0 0 00:06:51.234 ==================================================================================== 00:06:51.234 Total 430304/s 1680 MiB/s 0 0' 00:06:51.234 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.234 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.234 14:53:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:51.234 14:53:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:51.234 14:53:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.234 14:53:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.234 14:53:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.234 14:53:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.234 14:53:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.234 14:53:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.234 14:53:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.234 14:53:14 -- accel/accel.sh@42 -- # jq -r . 00:06:51.234 [2024-11-18 14:53:14.597621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:51.234 [2024-11-18 14:53:14.597742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70938 ] 00:06:51.234 [2024-11-18 14:53:14.744463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.234 [2024-11-18 14:53:14.787144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=0x1 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=compare 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=software 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=32 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=32 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val=1 00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.492 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.492 14:53:14 -- accel/accel.sh@21 -- # val='1 seconds' 
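Every case here passes -y, so the configuration blocks report Verify: Yes and the Failed and Miscompares columns are expected to stay at zero, as they do in each table. A hedged post-processing check (assuming raw accel_perf output, without this log's timestamp prefixes, saved to accel_perf.log):

  # flag any non-zero Failed/Miscompares count on a Total row
  awk '/^Total/ && ($5 != 0 || $6 != 0) { print "FAIL:", $0; exit 1 }' accel_perf.log
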
00:06:51.492 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.493 14:53:14 -- accel/accel.sh@21 -- # val=Yes 00:06:51.493 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.493 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.493 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.493 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:51.493 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.493 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.428 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 14:53:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.428 14:53:15 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:52.428 14:53:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.428 00:06:52.428 real 0m2.814s 00:06:52.428 user 0m2.360s 00:06:52.428 sys 0m0.255s 00:06:52.428 14:53:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.428 ************************************ 00:06:52.428 END TEST accel_compare 00:06:52.428 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:52.428 ************************************ 00:06:52.428 14:53:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:52.428 14:53:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.428 14:53:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.428 14:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.428 ************************************ 00:06:52.428 START TEST accel_xor 00:06:52.428 ************************************ 00:06:52.428 14:53:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:52.428 14:53:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.428 14:53:16 -- accel/accel.sh@17 -- # local accel_module 00:06:52.428 
14:53:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:52.428 14:53:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:52.428 14:53:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.428 14:53:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.428 14:53:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.428 14:53:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.428 14:53:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.428 14:53:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.428 14:53:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.428 14:53:16 -- accel/accel.sh@42 -- # jq -r . 00:06:52.686 [2024-11-18 14:53:16.040965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.686 [2024-11-18 14:53:16.041086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70974 ] 00:06:52.686 [2024-11-18 14:53:16.188297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.686 [2024-11-18 14:53:16.229419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.066 14:53:17 -- accel/accel.sh@18 -- # out=' 00:06:54.066 SPDK Configuration: 00:06:54.066 Core mask: 0x1 00:06:54.066 00:06:54.066 Accel Perf Configuration: 00:06:54.066 Workload Type: xor 00:06:54.066 Source buffers: 2 00:06:54.066 Transfer size: 4096 bytes 00:06:54.066 Vector count 1 00:06:54.066 Module: software 00:06:54.066 Queue depth: 32 00:06:54.066 Allocate depth: 32 00:06:54.066 # threads/core: 1 00:06:54.066 Run time: 1 seconds 00:06:54.066 Verify: Yes 00:06:54.066 00:06:54.066 Running for 1 seconds... 00:06:54.066 00:06:54.066 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.066 ------------------------------------------------------------------------------------ 00:06:54.066 0,0 338368/s 1321 MiB/s 0 0 00:06:54.066 ==================================================================================== 00:06:54.066 Total 338368/s 1321 MiB/s 0 0' 00:06:54.066 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.066 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.066 14:53:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:54.066 14:53:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:54.066 14:53:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.066 14:53:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.066 14:53:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.066 14:53:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.066 14:53:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.066 14:53:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.066 14:53:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.066 14:53:17 -- accel/accel.sh@42 -- # jq -r . 00:06:54.066 [2024-11-18 14:53:17.436895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
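(The accel_perf invocation traced above can be reproduced by hand against the same build tree; a minimal sketch, assuming the repo is checked out and built at the path shown in the trace. The harness additionally passes -c /dev/fd/62 to feed a JSON accel configuration over a file descriptor, which is omitted here to run with the default software module:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  # -t 1: run time in seconds ("Run time: 1 seconds" above)
  # -w xor: workload type; -y: verify results ("Verify: Yes" above)
)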
00:06:54.066 [2024-11-18 14:53:17.436999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70994 ] 00:06:54.066 [2024-11-18 14:53:17.584354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.066 [2024-11-18 14:53:17.623269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.324 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.324 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.324 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.324 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.324 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.324 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.324 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.324 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.324 14:53:17 -- accel/accel.sh@21 -- # val=0x1 00:06:54.324 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.324 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=xor 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=2 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=software 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=32 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=32 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=1 00:06:54.325 14:53:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val=Yes 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:54.325 14:53:17 -- accel/accel.sh@21 -- # val= 00:06:54.325 14:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # IFS=: 00:06:54.325 14:53:17 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.261 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.261 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.261 14:53:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.261 14:53:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:55.262 14:53:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.262 00:06:55.262 real 0m2.780s 00:06:55.262 user 0m2.345s 00:06:55.262 sys 0m0.234s 00:06:55.262 14:53:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.262 14:53:18 -- common/autotest_common.sh@10 -- # set +x 00:06:55.262 ************************************ 00:06:55.262 END TEST accel_xor 00:06:55.262 ************************************ 00:06:55.262 14:53:18 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:55.262 14:53:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.262 14:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.262 14:53:18 -- common/autotest_common.sh@10 -- # set +x 00:06:55.262 ************************************ 00:06:55.262 START TEST accel_xor 00:06:55.262 ************************************ 00:06:55.262 
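(The second accel_xor case adds -x 3, raising the number of XOR source buffers from the default two of the previous run to three; the destination is the bytewise XOR of all sources. A quick illustration of the three-source semantics on a single byte, with illustrative values only:

$ printf '0x%02X\n' $(( 0xA5 ^ 0x5A ^ 0xFF ))
0x00
)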
14:53:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:55.262 14:53:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.262 14:53:18 -- accel/accel.sh@17 -- # local accel_module 00:06:55.262 14:53:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:55.262 14:53:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:55.262 14:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.262 14:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.262 14:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.262 14:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.262 14:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.262 14:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.262 14:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.262 14:53:18 -- accel/accel.sh@42 -- # jq -r . 00:06:55.520 [2024-11-18 14:53:18.862814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.520 [2024-11-18 14:53:18.862927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:06:55.520 [2024-11-18 14:53:19.014174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.520 [2024-11-18 14:53:19.052639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.895 14:53:20 -- accel/accel.sh@18 -- # out=' 00:06:56.895 SPDK Configuration: 00:06:56.895 Core mask: 0x1 00:06:56.895 00:06:56.895 Accel Perf Configuration: 00:06:56.895 Workload Type: xor 00:06:56.895 Source buffers: 3 00:06:56.895 Transfer size: 4096 bytes 00:06:56.895 Vector count 1 00:06:56.895 Module: software 00:06:56.895 Queue depth: 32 00:06:56.895 Allocate depth: 32 00:06:56.895 # threads/core: 1 00:06:56.895 Run time: 1 seconds 00:06:56.895 Verify: Yes 00:06:56.895 00:06:56.895 Running for 1 seconds... 00:06:56.895 00:06:56.895 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.895 ------------------------------------------------------------------------------------ 00:06:56.895 0,0 419456/s 1638 MiB/s 0 0 00:06:56.895 ==================================================================================== 00:06:56.895 Total 419456/s 1638 MiB/s 0 0' 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:56.895 14:53:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.895 14:53:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.895 14:53:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.895 14:53:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.895 14:53:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.895 14:53:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.895 14:53:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.895 14:53:20 -- accel/accel.sh@42 -- # jq -r . 00:06:56.895 [2024-11-18 14:53:20.249229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
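(The bandwidth column in the summary tables follows directly from the transfer rate and the fixed 4096-byte transfer size; checking the three-buffer result above, integer division reproduces the reported figure:

$ echo $(( 419456 * 4096 / 1048576 )) MiB/s
1638 MiB/s

By the same arithmetic the two-buffer run earlier works out to 338368 * 4096 / 2^20, i.e. 1321 MiB/s, so the three-source variant actually moved more data per second in this run.)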
00:06:56.895 [2024-11-18 14:53:20.249356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71045 ] 00:06:56.895 [2024-11-18 14:53:20.396155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.895 [2024-11-18 14:53:20.435008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val=0x1 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val=xor 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:56.895 14:53:20 -- accel/accel.sh@21 -- # val=3 00:06:56.895 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:56.895 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val=software 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val=32 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val=32 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val=1 00:06:57.154 14:53:20 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val=Yes 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:57.154 14:53:20 -- accel/accel.sh@21 -- # val= 00:06:57.154 14:53:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # IFS=: 00:06:57.154 14:53:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.090 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.090 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.090 14:53:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.090 14:53:21 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:58.090 14:53:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.090 00:06:58.090 real 0m2.773s 00:06:58.090 user 0m2.333s 00:06:58.090 sys 0m0.238s 00:06:58.090 14:53:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.090 ************************************ 00:06:58.090 END TEST accel_xor 00:06:58.090 14:53:21 -- common/autotest_common.sh@10 -- # set +x 00:06:58.090 ************************************ 00:06:58.090 14:53:21 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:58.090 14:53:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:58.090 14:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.090 14:53:21 -- common/autotest_common.sh@10 -- # set +x 00:06:58.090 ************************************ 00:06:58.090 START TEST accel_dif_verify 00:06:58.090 ************************************ 
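(The dif_verify workload exercises T10 DIF protection information rather than raw data. The configuration printed below, 4096-byte transfers over 512-byte blocks with 8 bytes of metadata, means each transfer covers eight protected blocks:

$ echo $(( 4096 / 512 )) blocks, $(( 4096 / 512 * 8 )) bytes of DIF per transfer
8 blocks, 64 bytes of DIF per transfer

In the standard T10 DIF layout each 8-byte field carries a guard tag computed over the block data, an application tag, and a reference tag, and dif_verify checks these against the expected values.)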
00:06:58.090 14:53:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:58.090 14:53:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.090 14:53:21 -- accel/accel.sh@17 -- # local accel_module 00:06:58.090 14:53:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:58.090 14:53:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:58.090 14:53:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.090 14:53:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.090 14:53:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.090 14:53:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.090 14:53:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.090 14:53:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.090 14:53:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.090 14:53:21 -- accel/accel.sh@42 -- # jq -r . 00:06:58.090 [2024-11-18 14:53:21.675493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.090 [2024-11-18 14:53:21.675599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71080 ] 00:06:58.347 [2024-11-18 14:53:21.820876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.347 [2024-11-18 14:53:21.859110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.721 14:53:23 -- accel/accel.sh@18 -- # out=' 00:06:59.721 SPDK Configuration: 00:06:59.721 Core mask: 0x1 00:06:59.721 00:06:59.721 Accel Perf Configuration: 00:06:59.721 Workload Type: dif_verify 00:06:59.721 Vector size: 4096 bytes 00:06:59.721 Transfer size: 4096 bytes 00:06:59.721 Block size: 512 bytes 00:06:59.721 Metadata size: 8 bytes 00:06:59.721 Vector count 1 00:06:59.721 Module: software 00:06:59.721 Queue depth: 32 00:06:59.721 Allocate depth: 32 00:06:59.721 # threads/core: 1 00:06:59.721 Run time: 1 seconds 00:06:59.721 Verify: No 00:06:59.721 00:06:59.722 Running for 1 seconds... 00:06:59.722 00:06:59.722 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.722 ------------------------------------------------------------------------------------ 00:06:59.722 0,0 128416/s 509 MiB/s 0 0 00:06:59.722 ==================================================================================== 00:06:59.722 Total 128416/s 501 MiB/s 0 0' 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:59.722 14:53:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:59.722 14:53:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.722 14:53:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.722 14:53:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.722 14:53:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.722 14:53:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.722 14:53:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.722 14:53:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.722 14:53:23 -- accel/accel.sh@42 -- # jq -r . 00:06:59.722 [2024-11-18 14:53:23.058226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
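(Note the small mismatch in the table above between the per-core figure, 509 MiB/s, and the Total row, 501 MiB/s, for the same 128416 transfers. The Total row is consistent with the nominal one-second run time:

$ echo $(( 128416 * 4096 / 1048576 )) MiB/s
501 MiB/s

The per-core figure is presumably computed from the actually measured elapsed time, which can come in slightly under the nominal run time; the transfer counts themselves agree, and the later tables in this section show the same pattern.)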
00:06:59.722 [2024-11-18 14:53:23.058349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71103 ] 00:06:59.722 [2024-11-18 14:53:23.201911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.722 [2024-11-18 14:53:23.239441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=0x1 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=dif_verify 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=software 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 
-- # val=32 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=32 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=1 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val=No 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:06:59.722 14:53:23 -- accel/accel.sh@21 -- # val= 00:06:59.722 14:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # IFS=: 00:06:59.722 14:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:01.095 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.095 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.095 14:53:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.095 14:53:24 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:01.095 14:53:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.095 00:07:01.095 real 0m2.763s 00:07:01.095 user 0m2.335s 00:07:01.095 sys 0m0.227s 00:07:01.095 14:53:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.095 14:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:01.095 ************************************ 00:07:01.095 END TEST 
accel_dif_verify 00:07:01.095 ************************************ 00:07:01.095 14:53:24 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:01.095 14:53:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:01.095 14:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.095 14:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:01.095 ************************************ 00:07:01.095 START TEST accel_dif_generate 00:07:01.095 ************************************ 00:07:01.095 14:53:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:01.095 14:53:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.095 14:53:24 -- accel/accel.sh@17 -- # local accel_module 00:07:01.095 14:53:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:01.095 14:53:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:01.095 14:53:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.095 14:53:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.095 14:53:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.095 14:53:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.095 14:53:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.095 14:53:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.095 14:53:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.095 14:53:24 -- accel/accel.sh@42 -- # jq -r . 00:07:01.095 [2024-11-18 14:53:24.475306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.095 [2024-11-18 14:53:24.475404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:07:01.095 [2024-11-18 14:53:24.617161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.095 [2024-11-18 14:53:24.654749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.473 14:53:25 -- accel/accel.sh@18 -- # out=' 00:07:02.473 SPDK Configuration: 00:07:02.473 Core mask: 0x1 00:07:02.473 00:07:02.473 Accel Perf Configuration: 00:07:02.473 Workload Type: dif_generate 00:07:02.473 Vector size: 4096 bytes 00:07:02.473 Transfer size: 4096 bytes 00:07:02.473 Block size: 512 bytes 00:07:02.473 Metadata size: 8 bytes 00:07:02.473 Vector count 1 00:07:02.473 Module: software 00:07:02.473 Queue depth: 32 00:07:02.473 Allocate depth: 32 00:07:02.473 # threads/core: 1 00:07:02.473 Run time: 1 seconds 00:07:02.473 Verify: No 00:07:02.473 00:07:02.473 Running for 1 seconds... 
00:07:02.473 00:07:02.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.473 ------------------------------------------------------------------------------------ 00:07:02.473 0,0 128352/s 509 MiB/s 0 0 00:07:02.473 ==================================================================================== 00:07:02.473 Total 128352/s 501 MiB/s 0 0' 00:07:02.473 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:02.473 14:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:02.473 14:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.473 14:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.473 14:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.473 14:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.473 14:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.473 14:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.473 14:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.473 14:53:25 -- accel/accel.sh@42 -- # jq -r . 00:07:02.473 [2024-11-18 14:53:25.829857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.473 [2024-11-18 14:53:25.829955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71153 ] 00:07:02.473 [2024-11-18 14:53:25.977949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.473 [2024-11-18 14:53:26.008603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=0x1 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=dif_generate 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 
00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=software 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=32 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=32 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=1 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val=No 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:02.473 14:53:26 -- accel/accel.sh@21 -- # val= 00:07:02.473 14:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:02.473 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- 
accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.847 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.847 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.847 14:53:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.847 14:53:27 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:03.847 14:53:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.847 00:07:03.847 real 0m2.701s 00:07:03.847 user 0m2.287s 00:07:03.847 sys 0m0.218s 00:07:03.847 14:53:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.847 14:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:03.847 ************************************ 00:07:03.847 END TEST accel_dif_generate 00:07:03.847 ************************************ 00:07:03.847 14:53:27 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:03.847 14:53:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:03.847 14:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.847 14:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:03.847 ************************************ 00:07:03.847 START TEST accel_dif_generate_copy 00:07:03.847 ************************************ 00:07:03.847 14:53:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:03.847 14:53:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.847 14:53:27 -- accel/accel.sh@17 -- # local accel_module 00:07:03.847 14:53:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.847 14:53:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.848 14:53:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.848 14:53:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.848 14:53:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.848 14:53:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.848 14:53:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.848 14:53:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.848 14:53:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.848 14:53:27 -- accel/accel.sh@42 -- # jq -r . 00:07:03.848 [2024-11-18 14:53:27.215553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
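(As the name suggests, dif_generate_copy pairs DIF generation with a data copy: instead of only writing protection information, the operation also moves each 4096-byte payload into a destination buffer, roughly what a storage stack does when inserting DIF on the write path. A one-line by-hand rerun, with the same caveats as the earlier xor example:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

The added copy is visible as lower throughput in the table that follows.)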
00:07:03.848 [2024-11-18 14:53:27.215654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:07:03.848 [2024-11-18 14:53:27.354751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.848 [2024-11-18 14:53:27.384407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.223 14:53:28 -- accel/accel.sh@18 -- # out=' 00:07:05.223 SPDK Configuration: 00:07:05.223 Core mask: 0x1 00:07:05.223 00:07:05.223 Accel Perf Configuration: 00:07:05.223 Workload Type: dif_generate_copy 00:07:05.223 Vector size: 4096 bytes 00:07:05.223 Transfer size: 4096 bytes 00:07:05.223 Vector count 1 00:07:05.223 Module: software 00:07:05.223 Queue depth: 32 00:07:05.223 Allocate depth: 32 00:07:05.223 # threads/core: 1 00:07:05.223 Run time: 1 seconds 00:07:05.223 Verify: No 00:07:05.223 00:07:05.223 Running for 1 seconds... 00:07:05.223 00:07:05.223 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.223 ------------------------------------------------------------------------------------ 00:07:05.223 0,0 90624/s 359 MiB/s 0 0 00:07:05.223 ==================================================================================== 00:07:05.223 Total 90624/s 354 MiB/s 0 0' 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.223 14:53:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:05.223 14:53:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:05.223 14:53:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.223 14:53:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.223 14:53:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.223 14:53:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.223 14:53:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.223 14:53:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.223 14:53:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.223 14:53:28 -- accel/accel.sh@42 -- # jq -r . 00:07:05.223 [2024-11-18 14:53:28.559253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
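(Comparing the two DIF-generation tables: plain dif_generate sustained 128352 transfers/s while dif_generate_copy reached 90624/s, about 70% of the rate once the copy is added:

$ echo "scale=2; 90624 / 128352" | bc
.70
)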
00:07:05.223 [2024-11-18 14:53:28.559450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:07:05.223 [2024-11-18 14:53:28.706705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.223 [2024-11-18 14:53:28.736848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.223 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.223 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.223 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.223 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.223 14:53:28 -- accel/accel.sh@21 -- # val=0x1 00:07:05.223 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.223 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.223 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.223 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.223 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.223 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val=software 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val=32 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val=32 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 
-- # val=1 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val=No 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.224 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.224 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.224 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@21 -- # val= 00:07:06.600 14:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.600 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.600 14:53:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.600 14:53:29 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:06.600 14:53:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.600 00:07:06.600 real 0m2.694s 00:07:06.600 user 0m2.283s 00:07:06.600 sys 0m0.207s 00:07:06.600 14:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.600 ************************************ 00:07:06.600 END TEST accel_dif_generate_copy 00:07:06.600 ************************************ 00:07:06.600 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:06.600 14:53:29 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:06.600 14:53:29 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.600 14:53:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:06.600 14:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.600 14:53:29 -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.600 ************************************ 00:07:06.600 START TEST accel_comp 00:07:06.600 ************************************ 00:07:06.600 14:53:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.600 14:53:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.600 14:53:29 -- accel/accel.sh@17 -- # local accel_module 00:07:06.600 14:53:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.600 14:53:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.600 14:53:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.600 14:53:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.600 14:53:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.600 14:53:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.600 14:53:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.600 14:53:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.600 14:53:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.600 14:53:29 -- accel/accel.sh@42 -- # jq -r . 00:07:06.600 [2024-11-18 14:53:29.959730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.600 [2024-11-18 14:53:29.959857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71245 ] 00:07:06.600 [2024-11-18 14:53:30.107764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.600 [2024-11-18 14:53:30.138902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.974 14:53:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:07.974 00:07:07.974 SPDK Configuration: 00:07:07.974 Core mask: 0x1 00:07:07.974 00:07:07.974 Accel Perf Configuration: 00:07:07.974 Workload Type: compress 00:07:07.974 Transfer size: 4096 bytes 00:07:07.974 Vector count 1 00:07:07.974 Module: software 00:07:07.974 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.974 Queue depth: 32 00:07:07.974 Allocate depth: 32 00:07:07.974 # threads/core: 1 00:07:07.974 Run time: 1 seconds 00:07:07.974 Verify: No 00:07:07.974 00:07:07.974 Running for 1 seconds... 
00:07:07.974 00:07:07.974 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.974 ------------------------------------------------------------------------------------ 00:07:07.974 0,0 49024/s 204 MiB/s 0 0 00:07:07.974 ==================================================================================== 00:07:07.974 Total 49024/s 191 MiB/s 0 0' 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.974 14:53:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.974 14:53:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.974 14:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.974 14:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.974 14:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.974 14:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.974 14:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.974 14:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.974 14:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.974 14:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:07.974 [2024-11-18 14:53:31.313234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.974 [2024-11-18 14:53:31.313352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71260 ] 00:07:07.974 [2024-11-18 14:53:31.456907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.974 [2024-11-18 14:53:31.488551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.974 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.974 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.974 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.974 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.974 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.974 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.974 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.974 14:53:31 -- accel/accel.sh@21 -- # val=0x1 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=compress 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 
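(The compress case differs from the earlier workloads in that it reads a real input file instead of synthetic buffers; the traced invocation names it via -l. A minimal by-hand rerun, assuming the same tree, with the JSON config fd again omitted:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib

The 'Preparing input file...' line and the 'File Name:' field in the summary above both come from that -l argument.)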
00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=software 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=32 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=32 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=1 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val=No 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:07.975 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:07.975 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:07.975 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 
00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.348 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.348 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.348 14:53:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.348 14:53:32 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:09.348 14:53:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.348 00:07:09.348 real 0m2.707s 00:07:09.348 user 0m2.288s 00:07:09.348 sys 0m0.219s 00:07:09.348 14:53:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.348 ************************************ 00:07:09.348 END TEST accel_comp 00:07:09.348 ************************************ 00:07:09.348 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:09.348 14:53:32 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.348 14:53:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:09.348 14:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.348 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:09.348 ************************************ 00:07:09.348 START TEST accel_decomp 00:07:09.348 ************************************ 00:07:09.348 14:53:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.348 14:53:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.348 14:53:32 -- accel/accel.sh@17 -- # local accel_module 00:07:09.348 14:53:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.348 14:53:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.348 14:53:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.348 14:53:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.348 14:53:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.348 14:53:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.348 14:53:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.348 14:53:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.348 14:53:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.348 14:53:32 -- accel/accel.sh@42 -- # jq -r . 00:07:09.348 [2024-11-18 14:53:32.719392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.348 [2024-11-18 14:53:32.719498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71290 ] 00:07:09.348 [2024-11-18 14:53:32.867004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.348 [2024-11-18 14:53:32.898191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.728 14:53:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:10.728 00:07:10.728 SPDK Configuration: 00:07:10.728 Core mask: 0x1 00:07:10.728 00:07:10.728 Accel Perf Configuration: 00:07:10.728 Workload Type: decompress 00:07:10.728 Transfer size: 4096 bytes 00:07:10.728 Vector count 1 00:07:10.728 Module: software 00:07:10.728 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.728 Queue depth: 32 00:07:10.728 Allocate depth: 32 00:07:10.728 # threads/core: 1 00:07:10.728 Run time: 1 seconds 00:07:10.728 Verify: Yes 00:07:10.728 00:07:10.728 Running for 1 seconds... 00:07:10.728 00:07:10.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.728 ------------------------------------------------------------------------------------ 00:07:10.728 0,0 62176/s 242 MiB/s 0 0 00:07:10.728 ==================================================================================== 00:07:10.728 Total 62176/s 242 MiB/s 0 0' 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.728 14:53:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.728 14:53:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.728 14:53:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.728 14:53:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.728 14:53:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.728 14:53:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.728 14:53:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.728 14:53:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.728 14:53:34 -- accel/accel.sh@42 -- # jq -r . 00:07:10.728 [2024-11-18 14:53:34.072252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
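(Note the Verify field: the compress run above reported "Verify: No", while this decompress run adds -y and reports "Verify: Yes", i.e. the decompressed output is checked against the original input. A rough standalone equivalent, under the same assumptions as the earlier sketch:)
  # Decompress with verification; -y corresponds to "Verify: Yes" in the dump above.
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y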
00:07:10.728 [2024-11-18 14:53:34.072389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71313 ] 00:07:10.728 [2024-11-18 14:53:34.220073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.728 [2024-11-18 14:53:34.250727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=0x1 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=decompress 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=software 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=32 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- 
accel/accel.sh@21 -- # val=32 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=1 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val=Yes 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.728 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.728 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.728 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:12.103 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.103 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.103 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.103 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.103 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.103 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.103 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.104 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.104 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.104 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.104 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.104 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.104 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.104 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.104 14:53:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.104 14:53:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] ************************************ 00:07:12.104 END TEST accel_decomp 00:07:12.104 ************************************ 00:07:12.104 14:53:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.104 00:07:12.104 real 0m2.697s 00:07:12.104 user 0m2.290s 00:07:12.104 sys 0m0.206s 00:07:12.104 14:53:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.104 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:12.104 14:53:35 -- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:12.104 14:53:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:12.104 14:53:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.104 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:12.104 ************************************ 00:07:12.104 START TEST accel_decomp_full 00:07:12.104 ************************************ 00:07:12.104 14:53:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.104 14:53:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.104 14:53:35 -- accel/accel.sh@17 -- # local accel_module 00:07:12.104 14:53:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.104 14:53:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.104 14:53:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.104 14:53:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.104 14:53:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.104 14:53:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.104 14:53:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.104 14:53:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.104 14:53:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.104 14:53:35 -- accel/accel.sh@42 -- # jq -r . 00:07:12.104 [2024-11-18 14:53:35.478229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.104 [2024-11-18 14:53:35.478324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71349 ] 00:07:12.104 [2024-11-18 14:53:35.611077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.104 [2024-11-18 14:53:35.638597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.478 14:53:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.478 00:07:13.478 SPDK Configuration: 00:07:13.478 Core mask: 0x1 00:07:13.478 00:07:13.478 Accel Perf Configuration: 00:07:13.478 Workload Type: decompress 00:07:13.478 Transfer size: 111250 bytes 00:07:13.478 Vector count 1 00:07:13.478 Module: software 00:07:13.478 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.478 Queue depth: 32 00:07:13.478 Allocate depth: 32 00:07:13.478 # threads/core: 1 00:07:13.478 Run time: 1 seconds 00:07:13.478 Verify: Yes 00:07:13.478 00:07:13.478 Running for 1 seconds...
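(The _full variants differ from plain accel_decomp only by the trailing -o 0; comparing the two configuration dumps, the transfer size grows from the default 4096 bytes to the file's full 111250-byte chunks. Rough equivalent, inferred from the logged configurations rather than from accel_perf's documentation:)
  # "Full" decompress: with -o 0 the dump above reports "Transfer size: 111250 bytes".
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0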
00:07:13.478 00:07:13.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.478 ------------------------------------------------------------------------------------ 00:07:13.478 0,0 5824/s 617 MiB/s 0 0 00:07:13.478 ==================================================================================== 00:07:13.478 Total 5824/s 617 MiB/s 0 0' 00:07:13.478 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.478 14:53:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.478 14:53:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.478 14:53:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.478 14:53:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.478 14:53:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.478 14:53:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.478 14:53:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.478 14:53:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.478 14:53:36 -- accel/accel.sh@42 -- # jq -r . 00:07:13.478 [2024-11-18 14:53:36.809690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.478 [2024-11-18 14:53:36.809798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71364 ] 00:07:13.478 [2024-11-18 14:53:36.956350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.478 [2024-11-18 14:53:36.983452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val=0x1 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val=decompress 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.478 14:53:37 -- accel/accel.sh@20
-- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.478 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.478 14:53:37 -- accel/accel.sh@21 -- # val=software 00:07:13.478 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.478 14:53:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val=32 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val=32 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val=1 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val=Yes 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:13.479 14:53:37 -- accel/accel.sh@21 -- # val= 00:07:13.479 14:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:13.479 14:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # 
val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:14.853 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:14.853 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:14.853 14:53:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.853 14:53:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.853 14:53:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.853 00:07:14.853 real 0m2.670s 00:07:14.853 user 0m2.287s 00:07:14.853 sys 0m0.186s 00:07:14.853 14:53:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.853 ************************************ 00:07:14.853 END TEST accel_decomp_full 00:07:14.853 ************************************ 00:07:14.853 14:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 14:53:38 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.853 14:53:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:14.853 14:53:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.853 14:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.853 ************************************ 00:07:14.853 START TEST accel_decomp_mcore 00:07:14.853 ************************************ 00:07:14.853 14:53:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.853 14:53:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.853 14:53:38 -- accel/accel.sh@17 -- # local accel_module 00:07:14.853 14:53:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.853 14:53:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:14.853 14:53:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.853 14:53:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.853 14:53:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.853 14:53:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.853 14:53:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.853 14:53:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.853 14:53:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.853 14:53:38 -- accel/accel.sh@42 -- # jq -r . 00:07:14.853 [2024-11-18 14:53:38.205569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:14.853 [2024-11-18 14:53:38.205688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71400 ] 00:07:14.853 [2024-11-18 14:53:38.352674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.853 [2024-11-18 14:53:38.397142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.853 [2024-11-18 14:53:38.397717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.853 [2024-11-18 14:53:38.397967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.853 [2024-11-18 14:53:38.398043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.232 14:53:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.232 00:07:16.232 SPDK Configuration: 00:07:16.232 Core mask: 0xf 00:07:16.232 00:07:16.232 Accel Perf Configuration: 00:07:16.232 Workload Type: decompress 00:07:16.232 Transfer size: 4096 bytes 00:07:16.232 Vector count 1 00:07:16.232 Module: software 00:07:16.232 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.232 Queue depth: 32 00:07:16.232 Allocate depth: 32 00:07:16.232 # threads/core: 1 00:07:16.232 Run time: 1 seconds 00:07:16.232 Verify: Yes 00:07:16.232 00:07:16.232 Running for 1 seconds... 00:07:16.232 00:07:16.232 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.232 ------------------------------------------------------------------------------------ 00:07:16.232 0,0 64096/s 250 MiB/s 0 0 00:07:16.232 3,0 51840/s 202 MiB/s 0 0 00:07:16.232 2,0 52576/s 205 MiB/s 0 0 00:07:16.232 1,0 51872/s 202 MiB/s 0 0 00:07:16.232 ==================================================================================== 00:07:16.232 Total 220384/s 860 MiB/s 0 0' 00:07:16.232 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.232 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.232 14:53:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.232 14:53:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.232 14:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.232 14:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.232 14:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.232 14:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.232 14:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.232 14:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.232 14:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.232 14:53:39 -- accel/accel.sh@42 -- # jq -r . 00:07:16.232 [2024-11-18 14:53:39.621499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
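(The mcore variant adds -m 0xf: the EAL line above shows -c 0xf, four reactors come up on cores 0-3, and the results table gains one row per core. Rough equivalent, same assumptions as the earlier sketches:)
  # Multi-core decompress: -m 0xf -> "Core mask: 0xf", one result row per reactor.
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf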
00:07:16.232 [2024-11-18 14:53:39.621611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71423 ] 00:07:16.232 [2024-11-18 14:53:39.765840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.232 [2024-11-18 14:53:39.807080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.232 [2024-11-18 14:53:39.807426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.232 [2024-11-18 14:53:39.807626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.232 [2024-11-18 14:53:39.807694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=0xf 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=decompress 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=software 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=32 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=32 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=1 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val=Yes 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.491 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.491 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.491 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@21 -- # val= 00:07:17.427 14:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:17.427 14:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:17.427 14:53:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.427 14:53:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.427 14:53:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.427 00:07:17.427 real 0m2.816s 00:07:17.427 user 0m8.952s 00:07:17.427 sys 0m0.285s 00:07:17.427 ************************************ 00:07:17.427 END TEST accel_decomp_mcore 00:07:17.427 ************************************ 00:07:17.427 14:53:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.427 14:53:40 -- common/autotest_common.sh@10 -- # set +x 00:07:17.686 14:53:41 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.686 14:53:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:17.686 14:53:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.686 14:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:17.686 ************************************ 00:07:17.686 START TEST accel_decomp_full_mcore 00:07:17.686 ************************************ 00:07:17.686 14:53:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.686 14:53:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.686 14:53:41 -- accel/accel.sh@17 -- # local accel_module 00:07:17.686 14:53:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.686 14:53:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.686 14:53:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.686 14:53:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.686 14:53:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.686 14:53:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.686 14:53:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.686 14:53:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.686 14:53:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.686 14:53:41 -- accel/accel.sh@42 -- # jq -r . 00:07:17.686 [2024-11-18 14:53:41.085908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.686 [2024-11-18 14:53:41.086011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71456 ] 00:07:17.686 [2024-11-18 14:53:41.233637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.944 [2024-11-18 14:53:41.275068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.944 [2024-11-18 14:53:41.275401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.944 [2024-11-18 14:53:41.275585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.944 [2024-11-18 14:53:41.275659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.903 14:53:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:18.903 00:07:18.903 SPDK Configuration: 00:07:18.903 Core mask: 0xf 00:07:18.903 00:07:18.903 Accel Perf Configuration: 00:07:18.903 Workload Type: decompress 00:07:18.903 Transfer size: 111250 bytes 00:07:18.903 Vector count 1 00:07:18.903 Module: software 00:07:18.903 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.903 Queue depth: 32 00:07:18.903 Allocate depth: 32 00:07:18.903 # threads/core: 1 00:07:18.903 Run time: 1 seconds 00:07:18.903 Verify: Yes 00:07:18.903 00:07:18.903 Running for 1 seconds... 00:07:18.903 00:07:18.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.903 ------------------------------------------------------------------------------------ 00:07:18.903 0,0 5728/s 607 MiB/s 0 0 00:07:18.903 3,0 4320/s 458 MiB/s 0 0 00:07:18.903 2,0 4320/s 458 MiB/s 0 0 00:07:18.903 1,0 4320/s 458 MiB/s 0 0 00:07:18.903 ==================================================================================== 00:07:18.903 Total 18688/s 1982 MiB/s 0 0' 00:07:18.903 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:18.903 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:18.903 14:53:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.904 14:53:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.904 14:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.904 14:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.904 14:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.904 14:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.904 14:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.904 14:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.904 14:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.904 14:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:19.163 [2024-11-18 14:53:42.510217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
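(Sanity check on the table just printed: the Bandwidth column is transfers/s times the transfer size, converted to MiB/s. A quick shell check, using the figures from the accel_decomp_full_mcore table above:)
  # Bandwidth = transfers/s * transfer_size / 2^20
  echo $((5728 * 111250 / 1048576))    # core 0 row:  607 (MiB/s)
  echo $((18688 * 111250 / 1048576))   # Total row:  1982 (MiB/s)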
00:07:19.163 [2024-11-18 14:53:42.510342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71485 ] 00:07:19.163 [2024-11-18 14:53:42.654128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.163 [2024-11-18 14:53:42.685792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.163 [2024-11-18 14:53:42.686024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.163 [2024-11-18 14:53:42.686123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.163 [2024-11-18 14:53:42.686191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=0xf 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=decompress 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=software 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 
00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=32 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=32 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=1 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val=Yes 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.163 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.163 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.163 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@21 -- # val= 00:07:20.538 14:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:20.538 14:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:20.538 14:53:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.538 14:53:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.538 14:53:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.538 00:07:20.538 real 0m2.832s 00:07:20.538 user 0m9.037s 00:07:20.538 sys 0m0.265s 00:07:20.538 ************************************ 00:07:20.538 END TEST accel_decomp_full_mcore 00:07:20.538 ************************************ 00:07:20.538 14:53:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.538 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 14:53:43 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.538 14:53:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:20.538 14:53:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.538 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 ************************************ 00:07:20.538 START TEST accel_decomp_mthread 00:07:20.538 ************************************ 00:07:20.538 14:53:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.538 14:53:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.538 14:53:43 -- accel/accel.sh@17 -- # local accel_module 00:07:20.538 14:53:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.538 14:53:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:20.538 14:53:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.538 14:53:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.538 14:53:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.538 14:53:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.538 14:53:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.538 14:53:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.538 14:53:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.538 14:53:43 -- accel/accel.sh@42 -- # jq -r . 00:07:20.538 [2024-11-18 14:53:43.962307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.538 [2024-11-18 14:53:43.962576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71518 ] 00:07:20.538 [2024-11-18 14:53:44.108125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.797 [2024-11-18 14:53:44.149896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.170 14:53:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:22.170 00:07:22.170 SPDK Configuration: 00:07:22.170 Core mask: 0x1 00:07:22.170 00:07:22.170 Accel Perf Configuration: 00:07:22.170 Workload Type: decompress 00:07:22.170 Transfer size: 4096 bytes 00:07:22.170 Vector count 1 00:07:22.170 Module: software 00:07:22.170 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.170 Queue depth: 32 00:07:22.170 Allocate depth: 32 00:07:22.170 # threads/core: 2 00:07:22.170 Run time: 1 seconds 00:07:22.170 Verify: Yes 00:07:22.170 00:07:22.170 Running for 1 seconds... 00:07:22.170 00:07:22.170 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.170 ------------------------------------------------------------------------------------ 00:07:22.170 0,1 31360/s 122 MiB/s 0 0 00:07:22.170 0,0 31264/s 122 MiB/s 0 0 00:07:22.170 ==================================================================================== 00:07:22.170 Total 62624/s 244 MiB/s 0 0' 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.170 14:53:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.170 14:53:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.170 14:53:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.170 14:53:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.170 14:53:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.170 14:53:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.170 14:53:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.170 14:53:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.170 14:53:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.170 14:53:45 -- accel/accel.sh@42 -- # jq -r . 00:07:22.170 [2024-11-18 14:53:45.373813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
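(Here -T 2 runs two worker threads on the single enabled core, which is why the table reports rows 0,0 and 0,1 whose transfer counts sum to the Total line. Rough equivalent:)
  # Threaded decompress: -T 2 -> "# threads/core: 2" in the configuration dump.
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2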
00:07:22.170 [2024-11-18 14:53:45.373926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71539 ] 00:07:22.170 [2024-11-18 14:53:45.523294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.170 [2024-11-18 14:53:45.567002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.170 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.170 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.170 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.170 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.170 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.170 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.170 14:53:45 -- accel/accel.sh@21 -- # val=0x1 00:07:22.170 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.170 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.170 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=decompress 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=software 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=32 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- 
accel/accel.sh@21 -- # val=32 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=2 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val=Yes 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.171 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.171 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.171 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@21 -- # val= 00:07:23.555 14:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.555 14:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.555 14:53:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.555 14:53:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.555 14:53:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.555 00:07:23.555 real 0m2.896s 00:07:23.555 user 0m2.433s 00:07:23.555 sys 0m0.259s 00:07:23.555 14:53:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.555 14:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.555 ************************************ 00:07:23.555 END 
TEST accel_decomp_mthread 00:07:23.555 ************************************ 00:07:23.555 14:53:46 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.555 14:53:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:23.555 14:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.555 14:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.555 ************************************ 00:07:23.555 START TEST accel_deomp_full_mthread 00:07:23.555 ************************************ 00:07:23.555 14:53:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.555 14:53:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.555 14:53:46 -- accel/accel.sh@17 -- # local accel_module 00:07:23.555 14:53:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.555 14:53:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.555 14:53:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.555 14:53:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.555 14:53:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.555 14:53:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.555 14:53:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.555 14:53:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.555 14:53:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.555 14:53:46 -- accel/accel.sh@42 -- # jq -r . 00:07:23.555 [2024-11-18 14:53:46.924695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.555 [2024-11-18 14:53:46.925016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71574 ] 00:07:23.555 [2024-11-18 14:53:47.075447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.816 [2024-11-18 14:53:47.148760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.199 14:53:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:25.199 00:07:25.199 SPDK Configuration: 00:07:25.199 Core mask: 0x1 00:07:25.199 00:07:25.199 Accel Perf Configuration: 00:07:25.199 Workload Type: decompress 00:07:25.199 Transfer size: 111250 bytes 00:07:25.199 Vector count 1 00:07:25.199 Module: software 00:07:25.199 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.199 Queue depth: 32 00:07:25.199 Allocate depth: 32 00:07:25.199 # threads/core: 2 00:07:25.199 Run time: 1 seconds 00:07:25.199 Verify: Yes 00:07:25.199 00:07:25.199 Running for 1 seconds... 
00:07:25.199 00:07:25.199 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.199 ------------------------------------------------------------------------------------ 00:07:25.199 0,1 2176/s 89 MiB/s 0 0 00:07:25.199 0,0 2176/s 89 MiB/s 0 0 00:07:25.199 ==================================================================================== 00:07:25.199 Total 4352/s 461 MiB/s 0 0' 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.199 14:53:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.199 14:53:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.199 14:53:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.199 14:53:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.199 14:53:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.199 14:53:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.199 14:53:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.199 14:53:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.199 14:53:48 -- accel/accel.sh@42 -- # jq -r . 00:07:25.199 [2024-11-18 14:53:48.430490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.199 [2024-11-18 14:53:48.430622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71595 ] 00:07:25.199 [2024-11-18 14:53:48.577053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.199 [2024-11-18 14:53:48.617740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=0x1 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=decompress 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=software 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=32 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=32 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=2 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val=Yes 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.199 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.199 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.199 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # 
read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@21 -- # val= 00:07:26.572 14:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:26.572 14:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:26.572 14:53:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.572 14:53:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.572 14:53:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.572 00:07:26.572 real 0m2.937s 00:07:26.572 user 0m2.410s 00:07:26.572 sys 0m0.319s 00:07:26.572 14:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.572 14:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.572 ************************************ 00:07:26.572 END TEST accel_deomp_full_mthread 00:07:26.572 ************************************ 00:07:26.572 14:53:49 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:26.572 14:53:49 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.572 14:53:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:26.572 14:53:49 -- accel/accel.sh@129 -- # build_accel_config 00:07:26.572 14:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.572 14:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.572 14:53:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.572 14:53:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.572 14:53:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.572 14:53:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.572 14:53:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.572 14:53:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.572 14:53:49 -- accel/accel.sh@42 -- # jq -r . 00:07:26.572 ************************************ 00:07:26.572 START TEST accel_dif_functional_tests 00:07:26.572 ************************************ 00:07:26.572 14:53:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.572 [2024-11-18 14:53:49.929902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:26.572 [2024-11-18 14:53:49.930021] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:07:26.572 [2024-11-18 14:53:50.079618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.572 [2024-11-18 14:53:50.123806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.572 [2024-11-18 14:53:50.123986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.572 [2024-11-18 14:53:50.124033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.891 00:07:26.891 00:07:26.891 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.891 http://cunit.sourceforge.net/ 00:07:26.891 00:07:26.891 00:07:26.891 Suite: accel_dif 00:07:26.891 Test: verify: DIF generated, GUARD check ...passed 00:07:26.891 Test: verify: DIF generated, APPTAG check ...passed 00:07:26.891 Test: verify: DIF generated, REFTAG check ...passed 00:07:26.891 Test: verify: DIF not generated, GUARD check ...passed 00:07:26.891 Test: verify: DIF not generated, APPTAG check ...passed 00:07:26.891 Test: verify: DIF not generated, REFTAG check ...passed 00:07:26.891 Test: verify: APPTAG correct, APPTAG check ...[2024-11-18 14:53:50.192834] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.891 [2024-11-18 14:53:50.192911] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.891 [2024-11-18 14:53:50.192965] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.891 [2024-11-18 14:53:50.193015] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.891 [2024-11-18 14:53:50.193043] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.891 [2024-11-18 14:53:50.193076] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.891 passed 00:07:26.891 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:26.891 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:26.891 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:26.891 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-18 14:53:50.193205] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:26.891 passed 00:07:26.891 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:26.891 Test: generate copy: DIF generated, GUARD check ...[2024-11-18 14:53:50.193429] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:26.891 passed 00:07:26.891 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:26.891 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:26.891 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:26.891 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:26.891 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:26.891 Test: generate copy: iovecs-len validate ...passed 00:07:26.891 Test: generate copy: buffer alignment validate ...[2024-11-18 14:53:50.193824] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:26.891 passed 00:07:26.891 00:07:26.891 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.891 suites 1 1 n/a 0 0 00:07:26.891 tests 20 20 20 0 0 00:07:26.891 asserts 204 204 204 0 n/a 00:07:26.891 00:07:26.891 Elapsed time = 0.003 seconds 00:07:26.891 00:07:26.891 real 0m0.505s 00:07:26.891 user 0m0.572s 00:07:26.891 sys 0m0.165s 00:07:26.891 ************************************ 00:07:26.891 END TEST accel_dif_functional_tests 00:07:26.891 ************************************ 00:07:26.892 14:53:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.892 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:26.892 00:07:26.892 real 0m59.561s 00:07:26.892 user 1m2.740s 00:07:26.892 sys 0m6.413s 00:07:26.892 14:53:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.892 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:26.892 ************************************ 00:07:26.892 END TEST accel 00:07:26.892 ************************************ 00:07:26.892 14:53:50 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:26.892 14:53:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.892 14:53:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.892 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:26.892 ************************************ 00:07:26.892 START TEST accel_rpc 00:07:26.892 ************************************ 00:07:26.892 14:53:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:27.150 * Looking for test storage... 00:07:27.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:27.150 14:53:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:27.150 14:53:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:27.150 14:53:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:27.150 14:53:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:27.150 14:53:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:27.150 14:53:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:27.150 14:53:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:27.150 14:53:50 -- scripts/common.sh@335 -- # IFS=.-: 00:07:27.150 14:53:50 -- scripts/common.sh@335 -- # read -ra ver1 00:07:27.150 14:53:50 -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.150 14:53:50 -- scripts/common.sh@336 -- # read -ra ver2 00:07:27.150 14:53:50 -- scripts/common.sh@337 -- # local 'op=<' 00:07:27.150 14:53:50 -- scripts/common.sh@339 -- # ver1_l=2 00:07:27.150 14:53:50 -- scripts/common.sh@340 -- # ver2_l=1 00:07:27.150 14:53:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:27.150 14:53:50 -- scripts/common.sh@343 -- # case "$op" in 00:07:27.150 14:53:50 -- scripts/common.sh@344 -- # : 1 00:07:27.150 14:53:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:27.150 14:53:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.150 14:53:50 -- scripts/common.sh@364 -- # decimal 1 00:07:27.150 14:53:50 -- scripts/common.sh@352 -- # local d=1 00:07:27.150 14:53:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.150 14:53:50 -- scripts/common.sh@354 -- # echo 1 00:07:27.150 14:53:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:27.150 14:53:50 -- scripts/common.sh@365 -- # decimal 2 00:07:27.150 14:53:50 -- scripts/common.sh@352 -- # local d=2 00:07:27.150 14:53:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.150 14:53:50 -- scripts/common.sh@354 -- # echo 2 00:07:27.150 14:53:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:27.150 14:53:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:27.150 14:53:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:27.150 14:53:50 -- scripts/common.sh@367 -- # return 0 00:07:27.150 14:53:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.150 14:53:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.150 --rc genhtml_branch_coverage=1 00:07:27.150 --rc genhtml_function_coverage=1 00:07:27.150 --rc genhtml_legend=1 00:07:27.150 --rc geninfo_all_blocks=1 00:07:27.150 --rc geninfo_unexecuted_blocks=1 00:07:27.150 00:07:27.150 ' 00:07:27.150 14:53:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.150 --rc genhtml_branch_coverage=1 00:07:27.150 --rc genhtml_function_coverage=1 00:07:27.150 --rc genhtml_legend=1 00:07:27.150 --rc geninfo_all_blocks=1 00:07:27.150 --rc geninfo_unexecuted_blocks=1 00:07:27.150 00:07:27.150 ' 00:07:27.150 14:53:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.150 --rc genhtml_branch_coverage=1 00:07:27.150 --rc genhtml_function_coverage=1 00:07:27.150 --rc genhtml_legend=1 00:07:27.150 --rc geninfo_all_blocks=1 00:07:27.150 --rc geninfo_unexecuted_blocks=1 00:07:27.150 00:07:27.151 ' 00:07:27.151 14:53:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:27.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.151 --rc genhtml_branch_coverage=1 00:07:27.151 --rc genhtml_function_coverage=1 00:07:27.151 --rc genhtml_legend=1 00:07:27.151 --rc geninfo_all_blocks=1 00:07:27.151 --rc geninfo_unexecuted_blocks=1 00:07:27.151 00:07:27.151 ' 00:07:27.151 14:53:50 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.151 14:53:50 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71704 00:07:27.151 14:53:50 -- accel/accel_rpc.sh@15 -- # waitforlisten 71704 00:07:27.151 14:53:50 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:27.151 14:53:50 -- common/autotest_common.sh@829 -- # '[' -z 71704 ']' 00:07:27.151 14:53:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.151 14:53:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.151 14:53:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.151 14:53:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.151 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:27.151 [2024-11-18 14:53:50.672675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.151 [2024-11-18 14:53:50.673224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71704 ] 00:07:27.409 [2024-11-18 14:53:50.817577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.409 [2024-11-18 14:53:50.857894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:27.409 [2024-11-18 14:53:50.858096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.977 14:53:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.977 14:53:51 -- common/autotest_common.sh@862 -- # return 0 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:27.977 14:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.977 14:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.977 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.977 ************************************ 00:07:27.977 START TEST accel_assign_opcode 00:07:27.977 ************************************ 00:07:27.977 14:53:51 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:27.977 14:53:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.977 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.977 [2024-11-18 14:53:51.510816] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:27.977 14:53:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:27.977 14:53:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.977 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:27.977 [2024-11-18 14:53:51.518806] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:27.977 14:53:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.977 14:53:51 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:27.977 14:53:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.977 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.235 14:53:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.235 14:53:51 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:28.235 14:53:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.235 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.235 14:53:51 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:28.235 14:53:51 -- accel/accel_rpc.sh@42 -- # grep software 00:07:28.235 14:53:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.235 software 00:07:28.235 00:07:28.235 
real 0m0.224s 00:07:28.235 user 0m0.035s 00:07:28.235 sys 0m0.008s 00:07:28.235 ************************************ 00:07:28.235 END TEST accel_assign_opcode 00:07:28.235 ************************************ 00:07:28.235 14:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.235 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.235 14:53:51 -- accel/accel_rpc.sh@55 -- # killprocess 71704 00:07:28.235 14:53:51 -- common/autotest_common.sh@936 -- # '[' -z 71704 ']' 00:07:28.235 14:53:51 -- common/autotest_common.sh@940 -- # kill -0 71704 00:07:28.235 14:53:51 -- common/autotest_common.sh@941 -- # uname 00:07:28.235 14:53:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.235 14:53:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71704 00:07:28.235 killing process with pid 71704 00:07:28.235 14:53:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.235 14:53:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.235 14:53:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71704' 00:07:28.235 14:53:51 -- common/autotest_common.sh@955 -- # kill 71704 00:07:28.235 14:53:51 -- common/autotest_common.sh@960 -- # wait 71704 00:07:28.803 ************************************ 00:07:28.803 END TEST accel_rpc 00:07:28.803 ************************************ 00:07:28.803 00:07:28.803 real 0m1.640s 00:07:28.803 user 0m1.608s 00:07:28.803 sys 0m0.407s 00:07:28.803 14:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.803 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.803 14:53:52 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:28.803 14:53:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.803 14:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.803 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.803 ************************************ 00:07:28.803 START TEST app_cmdline 00:07:28.803 ************************************ 00:07:28.803 14:53:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:28.803 * Looking for test storage... 
00:07:28.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:28.803 14:53:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:28.803 14:53:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:28.803 14:53:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:28.803 14:53:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:28.803 14:53:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:28.803 14:53:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:28.803 14:53:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:28.803 14:53:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:28.803 14:53:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:28.803 14:53:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.803 14:53:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:28.803 14:53:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:28.803 14:53:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:28.803 14:53:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:28.803 14:53:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:28.803 14:53:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:28.803 14:53:52 -- scripts/common.sh@344 -- # : 1 00:07:28.803 14:53:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:28.803 14:53:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.803 14:53:52 -- scripts/common.sh@364 -- # decimal 1 00:07:28.803 14:53:52 -- scripts/common.sh@352 -- # local d=1 00:07:28.803 14:53:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.803 14:53:52 -- scripts/common.sh@354 -- # echo 1 00:07:28.803 14:53:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:28.803 14:53:52 -- scripts/common.sh@365 -- # decimal 2 00:07:28.803 14:53:52 -- scripts/common.sh@352 -- # local d=2 00:07:28.803 14:53:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.803 14:53:52 -- scripts/common.sh@354 -- # echo 2 00:07:28.803 14:53:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:28.803 14:53:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:28.803 14:53:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:28.803 14:53:52 -- scripts/common.sh@367 -- # return 0 00:07:28.803 14:53:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.803 14:53:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:28.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.803 --rc genhtml_branch_coverage=1 00:07:28.803 --rc genhtml_function_coverage=1 00:07:28.803 --rc genhtml_legend=1 00:07:28.803 --rc geninfo_all_blocks=1 00:07:28.803 --rc geninfo_unexecuted_blocks=1 00:07:28.803 00:07:28.803 ' 00:07:28.803 14:53:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:28.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.803 --rc genhtml_branch_coverage=1 00:07:28.803 --rc genhtml_function_coverage=1 00:07:28.803 --rc genhtml_legend=1 00:07:28.803 --rc geninfo_all_blocks=1 00:07:28.803 --rc geninfo_unexecuted_blocks=1 00:07:28.803 00:07:28.803 ' 00:07:28.803 14:53:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:28.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.803 --rc genhtml_branch_coverage=1 00:07:28.803 --rc genhtml_function_coverage=1 00:07:28.804 --rc genhtml_legend=1 00:07:28.804 --rc geninfo_all_blocks=1 00:07:28.804 --rc geninfo_unexecuted_blocks=1 00:07:28.804 00:07:28.804 ' 00:07:28.804 14:53:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:28.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.804 --rc genhtml_branch_coverage=1 00:07:28.804 --rc genhtml_function_coverage=1 00:07:28.804 --rc genhtml_legend=1 00:07:28.804 --rc geninfo_all_blocks=1 00:07:28.804 --rc geninfo_unexecuted_blocks=1 00:07:28.804 00:07:28.804 ' 00:07:28.804 14:53:52 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.804 14:53:52 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71800 00:07:28.804 14:53:52 -- app/cmdline.sh@18 -- # waitforlisten 71800 00:07:28.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.804 14:53:52 -- common/autotest_common.sh@829 -- # '[' -z 71800 ']' 00:07:28.804 14:53:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.804 14:53:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.804 14:53:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.804 14:53:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.804 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.804 14:53:52 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.804 [2024-11-18 14:53:52.354247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.804 [2024-11-18 14:53:52.354397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71800 ] 00:07:29.063 [2024-11-18 14:53:52.498445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.063 [2024-11-18 14:53:52.539010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.063 [2024-11-18 14:53:52.539224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.628 14:53:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.628 14:53:53 -- common/autotest_common.sh@862 -- # return 0 00:07:29.628 14:53:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:29.886 { 00:07:29.886 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:29.886 "fields": { 00:07:29.886 "major": 24, 00:07:29.886 "minor": 1, 00:07:29.886 "patch": 1, 00:07:29.886 "suffix": "-pre", 00:07:29.886 "commit": "c13c99a5e" 00:07:29.886 } 00:07:29.886 } 00:07:29.886 14:53:53 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:29.886 14:53:53 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:29.886 14:53:53 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:29.886 14:53:53 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:29.886 14:53:53 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:29.886 14:53:53 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:29.886 14:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.886 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:29.886 14:53:53 -- app/cmdline.sh@26 -- # sort 00:07:29.886 14:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.886 14:53:53 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.886 14:53:53 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.886 14:53:53 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.886 14:53:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:29.886 14:53:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.886 14:53:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.886 14:53:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.886 14:53:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.886 14:53:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.886 14:53:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.886 14:53:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.886 14:53:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.886 14:53:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:29.886 14:53:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.144 request: 00:07:30.144 { 00:07:30.144 "method": "env_dpdk_get_mem_stats", 00:07:30.144 "req_id": 1 00:07:30.144 } 00:07:30.144 Got JSON-RPC error response 00:07:30.144 response: 00:07:30.144 { 00:07:30.144 "code": -32601, 00:07:30.144 "message": "Method not found" 00:07:30.144 } 00:07:30.144 14:53:53 -- common/autotest_common.sh@653 -- # es=1 00:07:30.144 14:53:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.144 14:53:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.144 14:53:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.144 14:53:53 -- app/cmdline.sh@1 -- # killprocess 71800 00:07:30.144 14:53:53 -- common/autotest_common.sh@936 -- # '[' -z 71800 ']' 00:07:30.144 14:53:53 -- common/autotest_common.sh@940 -- # kill -0 71800 00:07:30.144 14:53:53 -- common/autotest_common.sh@941 -- # uname 00:07:30.144 14:53:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.144 14:53:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71800 00:07:30.144 killing process with pid 71800 00:07:30.144 14:53:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:30.144 14:53:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:30.144 14:53:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71800' 00:07:30.144 14:53:53 -- common/autotest_common.sh@955 -- # kill 71800 00:07:30.144 14:53:53 -- common/autotest_common.sh@960 -- # wait 71800 00:07:30.403 ************************************ 00:07:30.403 END TEST app_cmdline 00:07:30.403 ************************************ 00:07:30.403 00:07:30.403 real 0m1.804s 00:07:30.403 user 0m2.073s 00:07:30.403 sys 0m0.436s 00:07:30.403 14:53:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.403 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.403 14:53:53 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:30.403 14:53:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.403 14:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.403 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.403 
************************************ 00:07:30.403 START TEST version 00:07:30.403 ************************************ 00:07:30.403 14:53:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:30.662 * Looking for test storage... 00:07:30.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:30.662 14:53:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:30.662 14:53:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:30.662 14:53:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:30.662 14:53:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:30.662 14:53:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:30.662 14:53:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:30.662 14:53:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:30.662 14:53:54 -- scripts/common.sh@335 -- # IFS=.-: 00:07:30.662 14:53:54 -- scripts/common.sh@335 -- # read -ra ver1 00:07:30.662 14:53:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.662 14:53:54 -- scripts/common.sh@336 -- # read -ra ver2 00:07:30.662 14:53:54 -- scripts/common.sh@337 -- # local 'op=<' 00:07:30.662 14:53:54 -- scripts/common.sh@339 -- # ver1_l=2 00:07:30.662 14:53:54 -- scripts/common.sh@340 -- # ver2_l=1 00:07:30.662 14:53:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:30.662 14:53:54 -- scripts/common.sh@343 -- # case "$op" in 00:07:30.662 14:53:54 -- scripts/common.sh@344 -- # : 1 00:07:30.662 14:53:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:30.662 14:53:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.662 14:53:54 -- scripts/common.sh@364 -- # decimal 1 00:07:30.662 14:53:54 -- scripts/common.sh@352 -- # local d=1 00:07:30.662 14:53:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.662 14:53:54 -- scripts/common.sh@354 -- # echo 1 00:07:30.662 14:53:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:30.662 14:53:54 -- scripts/common.sh@365 -- # decimal 2 00:07:30.662 14:53:54 -- scripts/common.sh@352 -- # local d=2 00:07:30.662 14:53:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.662 14:53:54 -- scripts/common.sh@354 -- # echo 2 00:07:30.662 14:53:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:30.662 14:53:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:30.662 14:53:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:30.662 14:53:54 -- scripts/common.sh@367 -- # return 0 00:07:30.662 14:53:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.662 14:53:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:30.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.662 --rc genhtml_branch_coverage=1 00:07:30.662 --rc genhtml_function_coverage=1 00:07:30.662 --rc genhtml_legend=1 00:07:30.662 --rc geninfo_all_blocks=1 00:07:30.662 --rc geninfo_unexecuted_blocks=1 00:07:30.662 00:07:30.662 ' 00:07:30.662 14:53:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:30.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.662 --rc genhtml_branch_coverage=1 00:07:30.662 --rc genhtml_function_coverage=1 00:07:30.662 --rc genhtml_legend=1 00:07:30.662 --rc geninfo_all_blocks=1 00:07:30.662 --rc geninfo_unexecuted_blocks=1 00:07:30.662 00:07:30.662 ' 00:07:30.662 14:53:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:30.662 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:30.662 --rc genhtml_branch_coverage=1 00:07:30.662 --rc genhtml_function_coverage=1 00:07:30.662 --rc genhtml_legend=1 00:07:30.662 --rc geninfo_all_blocks=1 00:07:30.662 --rc geninfo_unexecuted_blocks=1 00:07:30.662 00:07:30.662 ' 00:07:30.662 14:53:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:30.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.662 --rc genhtml_branch_coverage=1 00:07:30.662 --rc genhtml_function_coverage=1 00:07:30.662 --rc genhtml_legend=1 00:07:30.662 --rc geninfo_all_blocks=1 00:07:30.662 --rc geninfo_unexecuted_blocks=1 00:07:30.662 00:07:30.662 ' 00:07:30.662 14:53:54 -- app/version.sh@17 -- # get_header_version major 00:07:30.662 14:53:54 -- app/version.sh@14 -- # cut -f2 00:07:30.662 14:53:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.663 14:53:54 -- app/version.sh@14 -- # tr -d '"' 00:07:30.663 14:53:54 -- app/version.sh@17 -- # major=24 00:07:30.663 14:53:54 -- app/version.sh@18 -- # get_header_version minor 00:07:30.663 14:53:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.663 14:53:54 -- app/version.sh@14 -- # tr -d '"' 00:07:30.663 14:53:54 -- app/version.sh@14 -- # cut -f2 00:07:30.663 14:53:54 -- app/version.sh@18 -- # minor=1 00:07:30.663 14:53:54 -- app/version.sh@19 -- # get_header_version patch 00:07:30.663 14:53:54 -- app/version.sh@14 -- # cut -f2 00:07:30.663 14:53:54 -- app/version.sh@14 -- # tr -d '"' 00:07:30.663 14:53:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.663 14:53:54 -- app/version.sh@19 -- # patch=1 00:07:30.663 14:53:54 -- app/version.sh@20 -- # get_header_version suffix 00:07:30.663 14:53:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:30.663 14:53:54 -- app/version.sh@14 -- # cut -f2 00:07:30.663 14:53:54 -- app/version.sh@14 -- # tr -d '"' 00:07:30.663 14:53:54 -- app/version.sh@20 -- # suffix=-pre 00:07:30.663 14:53:54 -- app/version.sh@22 -- # version=24.1 00:07:30.663 14:53:54 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.663 14:53:54 -- app/version.sh@25 -- # version=24.1.1 00:07:30.663 14:53:54 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:30.663 14:53:54 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:30.663 14:53:54 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:30.663 14:53:54 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:30.663 14:53:54 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:30.663 ************************************ 00:07:30.663 END TEST version 00:07:30.663 ************************************ 00:07:30.663 00:07:30.663 real 0m0.195s 00:07:30.663 user 0m0.119s 00:07:30.663 sys 0m0.102s 00:07:30.663 14:53:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.663 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.663 14:53:54 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:30.663 14:53:54 -- spdk/autotest.sh@191 -- # uname -s 00:07:30.663 14:53:54 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:07:30.663 14:53:54 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:30.663 14:53:54 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:30.663 14:53:54 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:30.663 14:53:54 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:30.663 14:53:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:30.663 14:53:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.663 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.663 ************************************ 00:07:30.663 START TEST blockdev_nvme 00:07:30.663 ************************************ 00:07:30.663 14:53:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:30.923 * Looking for test storage... 00:07:30.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:30.923 14:53:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:30.923 14:53:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:30.923 14:53:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:30.923 14:53:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:30.923 14:53:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:30.923 14:53:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:30.923 14:53:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:30.923 14:53:54 -- scripts/common.sh@335 -- # IFS=.-: 00:07:30.923 14:53:54 -- scripts/common.sh@335 -- # read -ra ver1 00:07:30.923 14:53:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.923 14:53:54 -- scripts/common.sh@336 -- # read -ra ver2 00:07:30.923 14:53:54 -- scripts/common.sh@337 -- # local 'op=<' 00:07:30.923 14:53:54 -- scripts/common.sh@339 -- # ver1_l=2 00:07:30.923 14:53:54 -- scripts/common.sh@340 -- # ver2_l=1 00:07:30.923 14:53:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:30.923 14:53:54 -- scripts/common.sh@343 -- # case "$op" in 00:07:30.923 14:53:54 -- scripts/common.sh@344 -- # : 1 00:07:30.923 14:53:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:30.923 14:53:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.923 14:53:54 -- scripts/common.sh@364 -- # decimal 1 00:07:30.923 14:53:54 -- scripts/common.sh@352 -- # local d=1 00:07:30.923 14:53:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.923 14:53:54 -- scripts/common.sh@354 -- # echo 1 00:07:30.923 14:53:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:30.923 14:53:54 -- scripts/common.sh@365 -- # decimal 2 00:07:30.923 14:53:54 -- scripts/common.sh@352 -- # local d=2 00:07:30.923 14:53:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.923 14:53:54 -- scripts/common.sh@354 -- # echo 2 00:07:30.923 14:53:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:30.923 14:53:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:30.923 14:53:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:30.923 14:53:54 -- scripts/common.sh@367 -- # return 0 00:07:30.923 14:53:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.923 14:53:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:30.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.923 --rc genhtml_branch_coverage=1 00:07:30.923 --rc genhtml_function_coverage=1 00:07:30.923 --rc genhtml_legend=1 00:07:30.923 --rc geninfo_all_blocks=1 00:07:30.923 --rc geninfo_unexecuted_blocks=1 00:07:30.923 00:07:30.923 ' 00:07:30.923 14:53:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:30.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.923 --rc genhtml_branch_coverage=1 00:07:30.923 --rc genhtml_function_coverage=1 00:07:30.923 --rc genhtml_legend=1 00:07:30.923 --rc geninfo_all_blocks=1 00:07:30.923 --rc geninfo_unexecuted_blocks=1 00:07:30.923 00:07:30.923 ' 00:07:30.923 14:53:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:30.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.923 --rc genhtml_branch_coverage=1 00:07:30.923 --rc genhtml_function_coverage=1 00:07:30.923 --rc genhtml_legend=1 00:07:30.923 --rc geninfo_all_blocks=1 00:07:30.923 --rc geninfo_unexecuted_blocks=1 00:07:30.923 00:07:30.923 ' 00:07:30.923 14:53:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:30.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.923 --rc genhtml_branch_coverage=1 00:07:30.923 --rc genhtml_function_coverage=1 00:07:30.923 --rc genhtml_legend=1 00:07:30.923 --rc geninfo_all_blocks=1 00:07:30.923 --rc geninfo_unexecuted_blocks=1 00:07:30.923 00:07:30.923 ' 00:07:30.923 14:53:54 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:30.923 14:53:54 -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.923 14:53:54 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:30.923 14:53:54 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.923 14:53:54 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:30.923 14:53:54 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:30.923 14:53:54 -- bdev/blockdev.sh@18 -- # : 00:07:30.923 14:53:54 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:30.923 14:53:54 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:30.923 14:53:54 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:30.923 14:53:54 -- bdev/blockdev.sh@672 -- # uname -s 00:07:30.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.923 14:53:54 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:30.923 14:53:54 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:30.923 14:53:54 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:30.923 14:53:54 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:30.923 14:53:54 -- bdev/blockdev.sh@682 -- # dek= 00:07:30.923 14:53:54 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:30.923 14:53:54 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:30.923 14:53:54 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:30.923 14:53:54 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:30.923 14:53:54 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:30.923 14:53:54 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:30.923 14:53:54 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=71964 00:07:30.923 14:53:54 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.923 14:53:54 -- bdev/blockdev.sh@47 -- # waitforlisten 71964 00:07:30.923 14:53:54 -- common/autotest_common.sh@829 -- # '[' -z 71964 ']' 00:07:30.923 14:53:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.923 14:53:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.923 14:53:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.923 14:53:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.923 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.923 14:53:54 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.923 [2024-11-18 14:53:54.456654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.923 [2024-11-18 14:53:54.456906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71964 ] 00:07:31.182 [2024-11-18 14:53:54.601760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.182 [2024-11-18 14:53:54.643098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.182 [2024-11-18 14:53:54.643331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.748 14:53:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.748 14:53:55 -- common/autotest_common.sh@862 -- # return 0 00:07:31.748 14:53:55 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:31.748 14:53:55 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:31.748 14:53:55 -- bdev/blockdev.sh@79 -- # local json 00:07:31.748 14:53:55 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:31.748 14:53:55 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:32.005 14:53:55 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:32.005 14:53:55 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.005 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.005 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.005 14:53:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:32.005 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.005 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.264 14:53:55 -- bdev/blockdev.sh@738 -- # cat 00:07:32.264 14:53:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:32.264 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.264 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.264 14:53:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:32.264 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.264 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.264 14:53:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:32.264 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.264 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.264 14:53:55 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:32.264 14:53:55 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:32.264 14:53:55 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:32.264 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.264 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.264 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.264 14:53:55 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:32.264 14:53:55 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:32.265 14:53:55 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9bf5bc55-a7c1-43a7-9f33-4c74511a1655"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9bf5bc55-a7c1-43a7-9f33-4c74511a1655",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "0892c3e5-c766-4924-8b6d-342cc0f9305d"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0892c3e5-c766-4924-8b6d-342cc0f9305d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c2d8a04b-0352-40da-a5a4-8ae6065fff96"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2d8a04b-0352-40da-a5a4-8ae6065fff96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5432ef5d-36d5-469f-9630-5c589f8aa90b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5432ef5d-36d5-469f-9630-5c589f8aa90b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "eeaa7f3d-0fe8-422d-ac07-7fde798f4064"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eeaa7f3d-0fe8-422d-ac07-7fde798f4064",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bf67913a-35e3-4fe9-ab7d-9e24318ce14c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bf67913a-35e3-4fe9-ab7d-9e24318ce14c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:32.265 14:53:55 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:32.265 14:53:55 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:32.265 14:53:55 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:32.265 14:53:55 -- bdev/blockdev.sh@752 -- # killprocess 71964 00:07:32.265 14:53:55 -- common/autotest_common.sh@936 -- # '[' -z 71964 ']' 00:07:32.265 14:53:55 -- common/autotest_common.sh@940 -- # kill -0 71964 00:07:32.265 14:53:55 -- common/autotest_common.sh@941 -- # uname 00:07:32.265 14:53:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.265 14:53:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71964 
00:07:32.265 killing process with pid 71964 00:07:32.265 14:53:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.265 14:53:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.265 14:53:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71964' 00:07:32.265 14:53:55 -- common/autotest_common.sh@955 -- # kill 71964 00:07:32.265 14:53:55 -- common/autotest_common.sh@960 -- # wait 71964 00:07:32.523 14:53:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:32.523 14:53:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.523 14:53:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.523 14:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.523 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:32.523 ************************************ 00:07:32.523 START TEST bdev_hello_world 00:07:32.523 ************************************ 00:07:32.523 14:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:32.782 [2024-11-18 14:53:56.145185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.782 [2024-11-18 14:53:56.145467] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72031 ] 00:07:32.782 [2024-11-18 14:53:56.292456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.782 [2024-11-18 14:53:56.332971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.350 [2024-11-18 14:53:56.692846] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:33.350 [2024-11-18 14:53:56.692905] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:33.350 [2024-11-18 14:53:56.692928] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:33.350 [2024-11-18 14:53:56.695007] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:33.350 [2024-11-18 14:53:56.695639] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:33.350 [2024-11-18 14:53:56.695672] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:33.350 [2024-11-18 14:53:56.695919] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:33.350 00:07:33.350 [2024-11-18 14:53:56.695943] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:33.350 00:07:33.350 real 0m0.799s 00:07:33.350 user 0m0.508s 00:07:33.350 sys 0m0.189s 00:07:33.350 14:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.350 ************************************ 00:07:33.350 END TEST bdev_hello_world 00:07:33.350 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.350 ************************************ 00:07:33.350 14:53:56 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:33.350 14:53:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.350 14:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.350 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.609 ************************************ 00:07:33.609 START TEST bdev_bounds 00:07:33.609 ************************************ 00:07:33.609 14:53:56 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:33.609 14:53:56 -- bdev/blockdev.sh@288 -- # bdevio_pid=72058 00:07:33.609 14:53:56 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.609 14:53:56 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 72058' 00:07:33.609 Process bdevio pid: 72058 00:07:33.609 14:53:56 -- bdev/blockdev.sh@291 -- # waitforlisten 72058 00:07:33.609 14:53:56 -- common/autotest_common.sh@829 -- # '[' -z 72058 ']' 00:07:33.609 14:53:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.609 14:53:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.609 14:53:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.609 14:53:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.609 14:53:56 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:33.609 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.609 [2024-11-18 14:53:56.996866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:33.609 [2024-11-18 14:53:56.997113] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72058 ] 00:07:33.609 [2024-11-18 14:53:57.136933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.609 [2024-11-18 14:53:57.179433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.609 [2024-11-18 14:53:57.179647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.609 [2024-11-18 14:53:57.179684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.544 14:53:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.544 14:53:57 -- common/autotest_common.sh@862 -- # return 0 00:07:34.544 14:53:57 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:34.544 I/O targets: 00:07:34.544 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:34.544 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:34.544 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.544 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.544 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:34.544 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:34.544 00:07:34.544 00:07:34.544 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.544 http://cunit.sourceforge.net/ 00:07:34.544 00:07:34.544 00:07:34.544 Suite: bdevio tests on: Nvme3n1 00:07:34.544 Test: blockdev write read block ...passed 00:07:34.544 Test: blockdev write zeroes read block ...passed 00:07:34.544 Test: blockdev write zeroes read no split ...passed 00:07:34.544 Test: blockdev write zeroes read split ...passed 00:07:34.544 Test: blockdev write zeroes read split partial ...passed 00:07:34.544 Test: blockdev reset ...[2024-11-18 14:53:57.913153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:34.544 passed 00:07:34.544 Test: blockdev write read 8 blocks ...[2024-11-18 14:53:57.916568] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
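For orientation, the bdevio run traced above reduces to roughly the following bash sketch — a non-authoritative reading of the xtrace, with waitforlisten and the cleanup trap elided; the paths and flags are the ones this workspace actually uses:

    # start bdevio against the same bdev.json; -w makes it wait for an RPC trigger
    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$bdevio" -w -s 0 --json "$conf" '' &
    bdevio_pid=$!
    # (the harness polls the RPC socket here before proceeding)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # runs the CUnit suites below
    kill "$bdevio_pid" && wait "$bdevio_pid"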
00:07:34.544 passed 00:07:34.544 Test: blockdev write read size > 128k ...passed 00:07:34.544 Test: blockdev write read invalid size ...passed 00:07:34.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.544 Test: blockdev write read max offset ...passed 00:07:34.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.544 Test: blockdev writev readv 8 blocks ...passed 00:07:34.544 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.544 Test: blockdev writev readv block ...passed 00:07:34.544 Test: blockdev writev readv size > 128k ...passed 00:07:34.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.544 Test: blockdev comparev and writev ...[2024-11-18 14:53:57.934588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:34.544 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bda0e000 len:0x1000 00:07:34.544 [2024-11-18 14:53:57.934746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.544 passed 00:07:34.544 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.544 Test: blockdev nvme admin passthru ...[2024-11-18 14:53:57.937060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.544 [2024-11-18 14:53:57.937099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.544 passed 00:07:34.544 Test: blockdev copy ...passed 00:07:34.544 Suite: bdevio tests on: Nvme2n3 00:07:34.544 Test: blockdev write read block ...passed 00:07:34.544 Test: blockdev write zeroes read block ...passed 00:07:34.544 Test: blockdev write zeroes read no split ...passed 00:07:34.544 Test: blockdev write zeroes read split ...passed 00:07:34.544 Test: blockdev write zeroes read split partial ...passed 00:07:34.544 Test: blockdev reset ...[2024-11-18 14:53:57.964787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:34.544 [2024-11-18 14:53:57.969996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.544 passed 00:07:34.544 Test: blockdev write read 8 blocks ...passed 00:07:34.544 Test: blockdev write read size > 128k ...passed 00:07:34.544 Test: blockdev write read invalid size ...passed 00:07:34.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.544 Test: blockdev write read max offset ...passed 00:07:34.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.544 Test: blockdev writev readv 8 blocks ...passed 00:07:34.544 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.544 Test: blockdev writev readv block ...passed 00:07:34.544 Test: blockdev writev readv size > 128k ...passed 00:07:34.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.544 Test: blockdev comparev and writev ...[2024-11-18 14:53:57.990025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bda08000 len:0x1000 00:07:34.544 [2024-11-18 14:53:57.990154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.544 passed 00:07:34.544 Test: blockdev nvme passthru rw ...passed 00:07:34.544 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.544 Test: blockdev nvme admin passthru ...[2024-11-18 14:53:57.991946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.545 [2024-11-18 14:53:57.991980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev copy ...passed 00:07:34.545 Suite: bdevio tests on: Nvme2n2 00:07:34.545 Test: blockdev write read block ...passed 00:07:34.545 Test: blockdev write zeroes read block ...passed 00:07:34.545 Test: blockdev write zeroes read no split ...passed 00:07:34.545 Test: blockdev write zeroes read split ...passed 00:07:34.545 Test: blockdev write zeroes read split partial ...passed 00:07:34.545 Test: blockdev reset ...[2024-11-18 14:53:58.014594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:34.545 [2024-11-18 14:53:58.016592] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.545 passed 00:07:34.545 Test: blockdev write read 8 blocks ...passed 00:07:34.545 Test: blockdev write read size > 128k ...passed 00:07:34.545 Test: blockdev write read invalid size ...passed 00:07:34.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.545 Test: blockdev write read max offset ...passed 00:07:34.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.545 Test: blockdev writev readv 8 blocks ...passed 00:07:34.545 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.545 Test: blockdev writev readv block ...passed 00:07:34.545 Test: blockdev writev readv size > 128k ...passed 00:07:34.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.545 Test: blockdev comparev and writev ...[2024-11-18 14:53:58.030345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bda04000 len:0x1000 00:07:34.545 [2024-11-18 14:53:58.030384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev nvme passthru rw ...passed 00:07:34.545 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:53:58.032633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:07:34.545 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:07:34.545 [2024-11-18 14:53:58.032760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev copy ...passed 00:07:34.545 Suite: bdevio tests on: Nvme2n1 00:07:34.545 Test: blockdev write read block ...passed 00:07:34.545 Test: blockdev write zeroes read block ...passed 00:07:34.545 Test: blockdev write zeroes read no split ...passed 00:07:34.545 Test: blockdev write zeroes read split ...passed 00:07:34.545 Test: blockdev write zeroes read split partial ...passed 00:07:34.545 Test: blockdev reset ...[2024-11-18 14:53:58.053258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:34.545 passed 00:07:34.545 Test: blockdev write read 8 blocks ...[2024-11-18 14:53:58.056920] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.545 passed 00:07:34.545 Test: blockdev write read size > 128k ...passed 00:07:34.545 Test: blockdev write read invalid size ...passed 00:07:34.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.545 Test: blockdev write read max offset ...passed 00:07:34.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.545 Test: blockdev writev readv 8 blocks ...passed 00:07:34.545 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.545 Test: blockdev writev readv block ...passed 00:07:34.545 Test: blockdev writev readv size > 128k ...passed 00:07:34.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.545 Test: blockdev comparev and writev ...[2024-11-18 14:53:58.070616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bda04000 len:0x1000 00:07:34.545 [2024-11-18 14:53:58.070654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev nvme passthru rw ...passed 00:07:34.545 Test: blockdev nvme passthru vendor specific ...passed 00:07:34.545 Test: blockdev nvme admin passthru ...[2024-11-18 14:53:58.072405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.545 [2024-11-18 14:53:58.072433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev copy ...passed 00:07:34.545 Suite: bdevio tests on: Nvme1n1 00:07:34.545 Test: blockdev write read block ...passed 00:07:34.545 Test: blockdev write zeroes read block ...passed 00:07:34.545 Test: blockdev write zeroes read no split ...passed 00:07:34.545 Test: blockdev write zeroes read split ...passed 00:07:34.545 Test: blockdev write zeroes read split partial ...passed 00:07:34.545 Test: blockdev reset ...[2024-11-18 14:53:58.091027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:34.545 passed 00:07:34.545 Test: blockdev write read 8 blocks ...[2024-11-18 14:53:58.094187] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.545 passed 00:07:34.545 Test: blockdev write read size > 128k ...passed 00:07:34.545 Test: blockdev write read invalid size ...passed 00:07:34.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.545 Test: blockdev write read max offset ...passed 00:07:34.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.545 Test: blockdev writev readv 8 blocks ...passed 00:07:34.545 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.545 Test: blockdev writev readv block ...passed 00:07:34.545 Test: blockdev writev readv size > 128k ...passed 00:07:34.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.545 Test: blockdev comparev and writev ...[2024-11-18 14:53:58.107515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:34.545 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c980e000 len:0x1000 00:07:34.545 [2024-11-18 14:53:58.107623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:53:58.109539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:34.545 [2024-11-18 14:53:58.109570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:34.545 passed 00:07:34.545 Test: blockdev nvme admin passthru ...passed 00:07:34.545 Test: blockdev copy ...passed 00:07:34.545 Suite: bdevio tests on: Nvme0n1 00:07:34.545 Test: blockdev write read block ...passed 00:07:34.545 Test: blockdev write zeroes read block ...passed 00:07:34.545 Test: blockdev write zeroes read no split ...passed 00:07:34.545 Test: blockdev write zeroes read split ...passed 00:07:34.803 Test: blockdev write zeroes read split partial ...passed 00:07:34.803 Test: blockdev reset ...[2024-11-18 14:53:58.131535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:34.803 passed 00:07:34.803 Test: blockdev write read 8 blocks ...[2024-11-18 14:53:58.134295] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:34.803 passed 00:07:34.803 Test: blockdev write read size > 128k ...passed 00:07:34.803 Test: blockdev write read invalid size ...passed 00:07:34.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.803 Test: blockdev write read max offset ...passed 00:07:34.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.803 Test: blockdev writev readv 8 blocks ...passed 00:07:34.803 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.803 Test: blockdev writev readv block ...passed 00:07:34.803 Test: blockdev writev readv size > 128k ...passed 00:07:34.803 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.803 Test: blockdev comparev and writev ...passed 00:07:34.803 Test: blockdev nvme passthru rw ...[2024-11-18 14:53:58.146597] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:34.803 separate metadata which is not supported yet. 
00:07:34.803 passed 00:07:34.803 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:53:58.147903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:34.803 [2024-11-18 14:53:58.147939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:34.803 passed 00:07:34.803 Test: blockdev nvme admin passthru ...passed 00:07:34.803 Test: blockdev copy ...passed 00:07:34.803 00:07:34.803 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.803 suites 6 6 n/a 0 0 00:07:34.803 tests 138 138 138 0 0 00:07:34.803 asserts 893 893 893 0 n/a 00:07:34.803 00:07:34.803 Elapsed time = 0.561 seconds 00:07:34.803 0 00:07:34.803 14:53:58 -- bdev/blockdev.sh@293 -- # killprocess 72058 00:07:34.803 14:53:58 -- common/autotest_common.sh@936 -- # '[' -z 72058 ']' 00:07:34.803 14:53:58 -- common/autotest_common.sh@940 -- # kill -0 72058 00:07:34.803 14:53:58 -- common/autotest_common.sh@941 -- # uname 00:07:34.803 14:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.803 14:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72058 00:07:34.803 14:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.803 14:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.803 14:53:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72058' 00:07:34.803 killing process with pid 72058 00:07:34.803 14:53:58 -- common/autotest_common.sh@955 -- # kill 72058 00:07:34.803 14:53:58 -- common/autotest_common.sh@960 -- # wait 72058 00:07:34.803 14:53:58 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:34.803 00:07:34.803 real 0m1.415s 00:07:34.803 user 0m3.462s 00:07:34.803 sys 0m0.263s 00:07:34.803 14:53:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.803 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.803 ************************************ 00:07:34.803 END TEST bdev_bounds 00:07:34.803 ************************************ 00:07:35.062 14:53:58 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:35.062 14:53:58 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:35.062 14:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.062 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.062 ************************************ 00:07:35.062 START TEST bdev_nbd 00:07:35.062 ************************************ 00:07:35.062 14:53:58 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:35.062 14:53:58 -- bdev/blockdev.sh@298 -- # uname -s 00:07:35.062 14:53:58 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:35.062 14:53:58 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.062 14:53:58 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.062 14:53:58 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.062 14:53:58 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:35.062 14:53:58 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:35.062 14:53:58 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:35.062 14:53:58 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:35.062 14:53:58 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:35.062 14:53:58 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:35.062 14:53:58 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:35.062 14:53:58 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:35.062 14:53:58 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.062 14:53:58 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:35.062 14:53:58 -- bdev/blockdev.sh@316 -- # nbd_pid=72111 00:07:35.062 14:53:58 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:35.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.062 14:53:58 -- bdev/blockdev.sh@318 -- # waitforlisten 72111 /var/tmp/spdk-nbd.sock 00:07:35.062 14:53:58 -- common/autotest_common.sh@829 -- # '[' -z 72111 ']' 00:07:35.062 14:53:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.062 14:53:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.062 14:53:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:35.062 14:53:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.062 14:53:58 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:35.062 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.062 [2024-11-18 14:53:58.464445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:35.062 [2024-11-18 14:53:58.464695] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.062 [2024-11-18 14:53:58.612239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.320 [2024-11-18 14:53:58.652051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.899 14:53:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.899 14:53:59 -- common/autotest_common.sh@862 -- # return 0 00:07:35.899 14:53:59 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@24 -- # local i 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.899 14:53:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:36.157 14:53:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:36.157 14:53:59 -- common/autotest_common.sh@867 -- # local i 00:07:36.157 14:53:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:36.157 14:53:59 -- common/autotest_common.sh@871 -- # break 00:07:36.157 14:53:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.157 1+0 records in 00:07:36.157 1+0 records out 00:07:36.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114384 s, 3.6 MB/s 00:07:36.157 14:53:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 14:53:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.157 14:53:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 14:53:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:36.157 14:53:59 -- common/autotest_common.sh@887 -- # return 0 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.157 14:53:59 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:36.157 14:53:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:36.157 14:53:59 -- common/autotest_common.sh@867 -- # local i 00:07:36.157 14:53:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:36.157 14:53:59 -- common/autotest_common.sh@871 -- # break 00:07:36.157 14:53:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.157 14:53:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.157 1+0 records in 00:07:36.157 1+0 records out 00:07:36.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466304 s, 8.8 MB/s 00:07:36.157 14:53:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 14:53:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.157 14:53:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.157 14:53:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:36.157 14:53:59 -- common/autotest_common.sh@887 -- # return 0 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.157 14:53:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:36.416 14:53:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:36.416 14:53:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:36.416 14:53:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:36.416 14:53:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:36.416 14:53:59 -- common/autotest_common.sh@867 -- # local i 00:07:36.416 14:53:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.416 14:53:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.416 14:53:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:36.416 14:53:59 -- common/autotest_common.sh@871 -- # break 00:07:36.416 14:53:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.416 14:53:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.416 14:53:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.416 1+0 records in 00:07:36.416 1+0 records out 00:07:36.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499139 s, 8.2 MB/s 00:07:36.416 14:53:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.416 14:53:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.416 14:53:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.416 14:53:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:36.416 14:53:59 -- common/autotest_common.sh@887 -- # return 0 
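The waitfornbd cycles above all follow one fixed pattern; a minimal sketch of it, assuming bash — the retry delay is an assumption, since the xtrace only shows the loop counters and the grep:

    # poll /proc/partitions until the nbd device appears, then prove it
    # answers I/O by copying one 4 KiB block through O_DIRECT
    waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed delay; not visible in the xtrace
      done
      local out=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      dd if=/dev/"$nbd_name" of="$out" bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s "$out")
      rm -f "$out"
      [ "$size" != 0 ]   # a non-empty read means the bdev is serving I/O
    }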
00:07:36.416 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.416 14:53:59 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.416 14:53:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:36.677 14:54:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:36.677 14:54:00 -- common/autotest_common.sh@867 -- # local i 00:07:36.677 14:54:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.677 14:54:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.677 14:54:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:36.677 14:54:00 -- common/autotest_common.sh@871 -- # break 00:07:36.677 14:54:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.677 14:54:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.677 14:54:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.677 1+0 records in 00:07:36.677 1+0 records out 00:07:36.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357574 s, 11.5 MB/s 00:07:36.677 14:54:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.677 14:54:00 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.677 14:54:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.677 14:54:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:36.677 14:54:00 -- common/autotest_common.sh@887 -- # return 0 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.677 14:54:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:36.938 14:54:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:36.938 14:54:00 -- common/autotest_common.sh@867 -- # local i 00:07:36.938 14:54:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.938 14:54:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.938 14:54:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:36.938 14:54:00 -- common/autotest_common.sh@871 -- # break 00:07:36.938 14:54:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.938 14:54:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.938 14:54:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.938 1+0 records in 00:07:36.938 1+0 records out 00:07:36.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609849 s, 6.7 MB/s 00:07:36.938 14:54:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.938 14:54:00 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.938 14:54:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.938 14:54:00 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:07:36.938 14:54:00 -- common/autotest_common.sh@887 -- # return 0 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.938 14:54:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:37.199 14:54:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:37.200 14:54:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:37.200 14:54:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:37.200 14:54:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:37.200 14:54:00 -- common/autotest_common.sh@867 -- # local i 00:07:37.200 14:54:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.200 14:54:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.200 14:54:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:37.200 14:54:00 -- common/autotest_common.sh@871 -- # break 00:07:37.200 14:54:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.200 14:54:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.200 14:54:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.200 1+0 records in 00:07:37.200 1+0 records out 00:07:37.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011448 s, 3.6 MB/s 00:07:37.200 14:54:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.200 14:54:00 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.200 14:54:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.200 14:54:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.200 14:54:00 -- common/autotest_common.sh@887 -- # return 0 00:07:37.200 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.200 14:54:00 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.200 14:54:00 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.461 14:54:00 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd0", 00:07:37.461 "bdev_name": "Nvme0n1" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd1", 00:07:37.461 "bdev_name": "Nvme1n1" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd2", 00:07:37.461 "bdev_name": "Nvme2n1" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd3", 00:07:37.461 "bdev_name": "Nvme2n2" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd4", 00:07:37.461 "bdev_name": "Nvme2n3" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd5", 00:07:37.461 "bdev_name": "Nvme3n1" 00:07:37.461 } 00:07:37.461 ]' 00:07:37.461 14:54:00 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:37.461 14:54:00 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:37.461 14:54:00 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd0", 00:07:37.461 "bdev_name": "Nvme0n1" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd1", 00:07:37.461 "bdev_name": "Nvme1n1" 00:07:37.461 }, 00:07:37.461 { 00:07:37.461 "nbd_device": "/dev/nbd2", 00:07:37.462 "bdev_name": "Nvme2n1" 00:07:37.462 }, 00:07:37.462 { 00:07:37.462 "nbd_device": "/dev/nbd3", 00:07:37.462 
"bdev_name": "Nvme2n2" 00:07:37.462 }, 00:07:37.462 { 00:07:37.462 "nbd_device": "/dev/nbd4", 00:07:37.462 "bdev_name": "Nvme2n3" 00:07:37.462 }, 00:07:37.462 { 00:07:37.462 "nbd_device": "/dev/nbd5", 00:07:37.462 "bdev_name": "Nvme3n1" 00:07:37.462 } 00:07:37.462 ]' 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@51 -- # local i 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.462 14:54:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@41 -- # break 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.722 14:54:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@41 -- # break 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@41 -- # break 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.984 14:54:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:38.247 
14:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@41 -- # break 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.247 14:54:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.507 14:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:38.507 14:54:02 -- bdev/nbd_common.sh@41 -- # break 00:07:38.507 14:54:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.507 14:54:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.507 14:54:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@41 -- # break 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.769 14:54:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@65 -- # true 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@122 -- # count=0 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@127 -- # return 0 00:07:39.029 14:54:02 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@12 -- # local i 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.029 14:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:39.287 /dev/nbd0 00:07:39.287 14:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.287 14:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.287 14:54:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:39.287 14:54:02 -- common/autotest_common.sh@867 -- # local i 00:07:39.287 14:54:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.287 14:54:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.287 14:54:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:39.287 14:54:02 -- common/autotest_common.sh@871 -- # break 00:07:39.287 14:54:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.287 14:54:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.287 14:54:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.287 1+0 records in 00:07:39.287 1+0 records out 00:07:39.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895796 s, 4.6 MB/s 00:07:39.287 14:54:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.287 14:54:02 -- common/autotest_common.sh@884 -- # size=4096 00:07:39.287 14:54:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.287 14:54:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.287 14:54:02 -- common/autotest_common.sh@887 -- # return 0 00:07:39.287 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.287 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.287 14:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:39.544 /dev/nbd1 00:07:39.544 14:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.544 14:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.544 14:54:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:39.544 14:54:02 -- common/autotest_common.sh@867 -- # local i 00:07:39.544 14:54:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.544 14:54:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.544 14:54:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:39.544 14:54:02 -- common/autotest_common.sh@871 -- # break 
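After each nbd_start_disk RPC, the waitfornbd helper traced above confirms readiness in two stages: the device name must appear in /proc/partitions, and a single 4 KiB O_DIRECT read must succeed and produce a non-empty file. A sketch of that probe, with the temp-file path taken from the log; the second retry loop around dd is collapsed to one attempt for brevity:

# Sketch of the waitfornbd readiness probe; retry details simplified.
waitfornbd_sketch() {
    local nbd_name=$1
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # path as in the log
    local i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break       # wait for the kernel node
    done
    # Prove the device answers I/O: one 4 KiB direct read must succeed and
    # yield a file of non-zero size, exactly the dd/stat pair in the trace.
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" != 0 ] || return 1
    rm -f "$tmp"
}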
00:07:39.544 14:54:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.544 14:54:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.544 14:54:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.544 1+0 records in 00:07:39.544 1+0 records out 00:07:39.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776546 s, 5.3 MB/s 00:07:39.544 14:54:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.544 14:54:02 -- common/autotest_common.sh@884 -- # size=4096 00:07:39.544 14:54:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.544 14:54:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.544 14:54:02 -- common/autotest_common.sh@887 -- # return 0 00:07:39.544 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.544 14:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.544 14:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:39.544 /dev/nbd10 00:07:39.544 14:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:39.802 14:54:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:39.802 14:54:03 -- common/autotest_common.sh@867 -- # local i 00:07:39.802 14:54:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:39.802 14:54:03 -- common/autotest_common.sh@871 -- # break 00:07:39.802 14:54:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.802 1+0 records in 00:07:39.802 1+0 records out 00:07:39.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454186 s, 9.0 MB/s 00:07:39.802 14:54:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.802 14:54:03 -- common/autotest_common.sh@884 -- # size=4096 00:07:39.802 14:54:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.802 14:54:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.802 14:54:03 -- common/autotest_common.sh@887 -- # return 0 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:39.802 /dev/nbd11 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:39.802 14:54:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:39.802 14:54:03 -- common/autotest_common.sh@867 -- # local i 00:07:39.802 14:54:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:39.802 14:54:03 -- 
common/autotest_common.sh@871 -- # break 00:07:39.802 14:54:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.802 14:54:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.802 1+0 records in 00:07:39.802 1+0 records out 00:07:39.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614014 s, 6.7 MB/s 00:07:39.802 14:54:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.802 14:54:03 -- common/autotest_common.sh@884 -- # size=4096 00:07:39.802 14:54:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.802 14:54:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.802 14:54:03 -- common/autotest_common.sh@887 -- # return 0 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.802 14:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:40.059 /dev/nbd12 00:07:40.059 14:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:40.059 14:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:40.059 14:54:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:40.059 14:54:03 -- common/autotest_common.sh@867 -- # local i 00:07:40.059 14:54:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.059 14:54:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.059 14:54:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:40.059 14:54:03 -- common/autotest_common.sh@871 -- # break 00:07:40.059 14:54:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.059 14:54:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.060 14:54:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.060 1+0 records in 00:07:40.060 1+0 records out 00:07:40.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392164 s, 10.4 MB/s 00:07:40.060 14:54:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.060 14:54:03 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.060 14:54:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.060 14:54:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.060 14:54:03 -- common/autotest_common.sh@887 -- # return 0 00:07:40.060 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.060 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.060 14:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:40.318 /dev/nbd13 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:40.318 14:54:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:40.318 14:54:03 -- common/autotest_common.sh@867 -- # local i 00:07:40.318 14:54:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.318 14:54:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.318 14:54:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
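This second attach pass pairs the same six bdevs with /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd13, one nbd_start_disk RPC per pairing, and then nbd_get_count (just below) confirms via jq that exactly six exports exist. A minimal sketch, with rpc.py, socket path, and names copied from the trace:

# Sketch: bdev-to-/dev/nbdX pairing plus the export count check that follows.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"   # one RPC per pairing
done
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 6 ]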
00:07:40.318 14:54:03 -- common/autotest_common.sh@871 -- # break 00:07:40.318 14:54:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.318 14:54:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.318 14:54:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.318 1+0 records in 00:07:40.318 1+0 records out 00:07:40.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675719 s, 6.1 MB/s 00:07:40.318 14:54:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.318 14:54:03 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.318 14:54:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.318 14:54:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.318 14:54:03 -- common/autotest_common.sh@887 -- # return 0 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.318 14:54:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd0", 00:07:40.576 "bdev_name": "Nvme0n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd1", 00:07:40.576 "bdev_name": "Nvme1n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd10", 00:07:40.576 "bdev_name": "Nvme2n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd11", 00:07:40.576 "bdev_name": "Nvme2n2" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd12", 00:07:40.576 "bdev_name": "Nvme2n3" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd13", 00:07:40.576 "bdev_name": "Nvme3n1" 00:07:40.576 } 00:07:40.576 ]' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd0", 00:07:40.576 "bdev_name": "Nvme0n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd1", 00:07:40.576 "bdev_name": "Nvme1n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd10", 00:07:40.576 "bdev_name": "Nvme2n1" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd11", 00:07:40.576 "bdev_name": "Nvme2n2" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd12", 00:07:40.576 "bdev_name": "Nvme2n3" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "nbd_device": "/dev/nbd13", 00:07:40.576 "bdev_name": "Nvme3n1" 00:07:40.576 } 00:07:40.576 ]' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.576 /dev/nbd1 00:07:40.576 /dev/nbd10 00:07:40.576 /dev/nbd11 00:07:40.576 /dev/nbd12 00:07:40.576 /dev/nbd13' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.576 /dev/nbd1 00:07:40.576 /dev/nbd10 00:07:40.576 /dev/nbd11 00:07:40.576 /dev/nbd12 00:07:40.576 /dev/nbd13' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@65 -- # count=6 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:40.576 256+0 records in 00:07:40.576 256+0 records out 00:07:40.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934346 s, 112 MB/s 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.576 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.834 256+0 records in 00:07:40.834 256+0 records out 00:07:40.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0980403 s, 10.7 MB/s 00:07:40.834 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.834 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.834 256+0 records in 00:07:40.834 256+0 records out 00:07:40.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13004 s, 8.1 MB/s 00:07:40.834 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.834 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:41.092 256+0 records in 00:07:41.092 256+0 records out 00:07:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108155 s, 9.7 MB/s 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:41.092 256+0 records in 00:07:41.092 256+0 records out 00:07:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119824 s, 8.8 MB/s 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:41.092 256+0 records in 00:07:41.092 256+0 records out 00:07:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113233 s, 9.3 MB/s 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.092 14:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:41.351 256+0 records in 00:07:41.351 256+0 records out 00:07:41.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122056 s, 8.6 MB/s 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.351 14:54:04 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@51 -- # local i 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.351 14:54:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@41 -- # break 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.609 14:54:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.867 14:54:05 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@41 -- # break 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.867 14:54:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@41 -- # break 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@41 -- # break 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.125 14:54:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@41 -- # break 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.409 14:54:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@41 -- # break 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.692 14:54:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@65 -- # true 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@104 -- # count=0 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@109 -- # return 0 00:07:42.950 14:54:06 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:42.950 malloc_lvol_verify 00:07:42.950 14:54:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:43.208 d0cb0891-c876-4a41-93c4-0feab9df7030 00:07:43.208 14:54:06 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:43.466 fe3a3682-c394-4462-a776-4b25a8ab84cf 00:07:43.466 14:54:06 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:43.723 /dev/nbd0 00:07:43.723 14:54:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:43.723 mke2fs 1.47.0 (5-Feb-2023) 00:07:43.723 Discarding device blocks: 0/4096 done 00:07:43.723 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:43.724 00:07:43.724 Allocating group tables: 0/1 done 00:07:43.724 Writing inode tables: 0/1 done 00:07:43.724 Creating journal (1024 blocks): done 00:07:43.724 Writing superblocks and filesystem accounting information: 0/1 done 00:07:43.724 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@51 -- # local i 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.724 14:54:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@41 -- # break 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:43.982 14:54:07 -- bdev/nbd_common.sh@147 -- # return 0 00:07:43.982 14:54:07 -- bdev/blockdev.sh@324 -- # killprocess 72111 00:07:43.982 14:54:07 -- common/autotest_common.sh@936 -- # '[' -z 72111 ']' 00:07:43.982 14:54:07 -- common/autotest_common.sh@940 -- # kill -0 72111 00:07:43.982 14:54:07 -- common/autotest_common.sh@941 -- # uname 00:07:43.982 14:54:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:43.982 14:54:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72111 00:07:43.982 14:54:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:43.982 14:54:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:43.982 killing process with pid 72111 00:07:43.982 14:54:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72111' 00:07:43.982 14:54:07 -- common/autotest_common.sh@955 -- # kill 72111 00:07:43.982 14:54:07 -- common/autotest_common.sh@960 -- # wait 72111 00:07:44.240 14:54:07 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:44.240 00:07:44.240 real 0m9.177s 00:07:44.240 user 0m12.950s 00:07:44.240 sys 0m3.319s 00:07:44.240 14:54:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.240 14:54:07 -- common/autotest_common.sh@10 -- # set +x 00:07:44.240 ************************************ 00:07:44.240 END TEST bdev_nbd 00:07:44.240 ************************************ 00:07:44.240 14:54:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:44.240 14:54:07 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:44.240 skipping fio tests on NVMe due to multi-ns failures. 00:07:44.240 14:54:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:44.240 14:54:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:44.240 14:54:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:44.240 14:54:07 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:44.240 14:54:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.240 14:54:07 -- common/autotest_common.sh@10 -- # set +x 00:07:44.240 ************************************ 00:07:44.240 START TEST bdev_verify 00:07:44.240 ************************************ 00:07:44.240 14:54:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:44.240 [2024-11-18 14:54:07.685219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
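With the NBD suite done, bdev_verify drives all six namespaces through bdevperf instead of the kernel: queue depth 128, 4 KiB I/Os, the verify (write, read back, compare) workload, for five seconds on two reactors. A sketch of the invocation, with paths copied from the trace; the trailing empty argument is forwarded by the run_test wrapper as shown:

# Sketch of the bdev_verify invocation traced above. -q 128: outstanding I/Os
# per job; -o 4096: I/O size in bytes; -w verify: data-integrity workload;
# -t 5: seconds to run; -m 0x3: cores 0 and 1; -C is passed through as-is by
# the test script.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''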
00:07:44.240 [2024-11-18 14:54:07.685348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72480 ] 00:07:44.498 [2024-11-18 14:54:07.832675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.498 [2024-11-18 14:54:07.874428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.498 [2024-11-18 14:54:07.874494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.756 Running I/O for 5 seconds... 00:07:50.018 00:07:50.018 Latency(us) 00:07:50.018 [2024-11-18T14:54:13.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x0 length 0xbd0bd 00:07:50.018 Nvme0n1 : 5.04 2945.66 11.51 0.00 0.00 43358.77 4285.05 53235.40 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:50.018 Nvme0n1 : 5.04 2709.42 10.58 0.00 0.00 47092.53 8368.44 57671.68 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x0 length 0xa0000 00:07:50.018 Nvme1n1 : 5.04 2944.91 11.50 0.00 0.00 43327.87 4915.20 49807.36 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0xa0000 length 0xa0000 00:07:50.018 Nvme1n1 : 5.04 2707.28 10.58 0.00 0.00 47071.74 10687.41 54848.59 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x0 length 0x80000 00:07:50.018 Nvme2n1 : 5.04 2943.60 11.50 0.00 0.00 43273.90 6503.19 44161.18 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x80000 length 0x80000 00:07:50.018 Nvme2n1 : 5.05 2714.26 10.60 0.00 0.00 46944.56 2659.25 52832.10 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x0 length 0x80000 00:07:50.018 Nvme2n2 : 5.05 2948.17 11.52 0.00 0.00 43148.38 2041.70 43959.53 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x80000 length 0x80000 00:07:50.018 Nvme2n2 : 5.05 2712.03 10.59 0.00 0.00 46883.34 5973.86 52428.80 00:07:50.018 [2024-11-18T14:54:13.608Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.018 Verification LBA range: start 0x0 length 0x80000 00:07:50.019 Nvme2n3 : 5.05 2947.30 11.51 0.00 0.00 43122.67 2860.90 43556.23 00:07:50.019 [2024-11-18T14:54:13.609Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x80000 length 0x80000 00:07:50.019 Nvme2n3 : 5.06 2717.55 10.62 0.00 0.00 46755.39 2293.76 51420.55 00:07:50.019 [2024-11-18T14:54:13.609Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x0 length 0x20000 00:07:50.019 Nvme3n1 : 
5.05 2945.87 11.51 0.00 0.00 43093.17 4511.90 44362.83 00:07:50.019 [2024-11-18T14:54:13.609Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:50.019 Verification LBA range: start 0x20000 length 0x20000 00:07:50.019 Nvme3n1 : 5.06 2716.85 10.61 0.00 0.00 46716.57 2999.53 51017.26 00:07:50.019 [2024-11-18T14:54:13.609Z] =================================================================================================================== 00:07:50.019 [2024-11-18T14:54:13.609Z] Total : 33952.90 132.63 0.00 0.00 44990.63 2041.70 57671.68 00:08:16.584 00:08:16.584 real 0m31.102s 00:08:16.584 user 1m1.192s 00:08:16.584 sys 0m0.358s 00:08:16.584 14:54:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.584 14:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:16.584 ************************************ 00:08:16.584 END TEST bdev_verify 00:08:16.584 ************************************ 00:08:16.584 14:54:38 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.584 14:54:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:16.584 14:54:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.584 14:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:16.584 ************************************ 00:08:16.584 START TEST bdev_verify_big_io 00:08:16.584 ************************************ 00:08:16.584 14:54:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.584 [2024-11-18 14:54:38.846828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.584 [2024-11-18 14:54:38.846937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72803 ] 00:08:16.584 [2024-11-18 14:54:38.995801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.584 [2024-11-18 14:54:39.037290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.584 [2024-11-18 14:54:39.037377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.584 Running I/O for 5 seconds... 
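The table above lists two jobs per namespace, one per reactor (core masks 0x1 and 0x2); the columns are runtime, IOPS, throughput in MiB/s, failed and timed-out I/Os per second, and average/min/max latency in microseconds. bdev_verify_big_io, now starting, reruns the same workload with 64 KiB I/Os, so in the table just below per-job IOPS fall by roughly an order of magnitude while MiB/s rises. The only change to the command is the block-size flag:

# Sketch: the big-I/O pass differs from bdev_verify only in -o (I/O size).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''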
00:08:21.931 00:08:21.931 Latency(us) 00:08:21.931 [2024-11-18T14:54:45.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0xbd0b 00:08:21.931 Nvme0n1 : 5.32 268.87 16.80 0.00 0.00 464516.07 60898.07 713031.68 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:21.931 Nvme0n1 : 5.45 193.32 12.08 0.00 0.00 645960.81 45169.43 1058255.16 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0xa000 00:08:21.931 Nvme1n1 : 5.35 276.11 17.26 0.00 0.00 449112.74 31457.28 654956.70 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0xa000 length 0xa000 00:08:21.931 Nvme1n1 : 5.47 199.13 12.45 0.00 0.00 615279.30 19559.98 948557.98 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0x8000 00:08:21.931 Nvme2n1 : 5.35 276.03 17.25 0.00 0.00 442553.14 31860.58 593655.34 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x8000 length 0x8000 00:08:21.931 Nvme2n1 : 5.47 199.06 12.44 0.00 0.00 600195.23 20164.92 845313.58 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0x8000 00:08:21.931 Nvme2n2 : 5.37 283.58 17.72 0.00 0.00 426155.25 17543.48 538806.74 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x8000 length 0x8000 00:08:21.931 Nvme2n2 : 5.49 205.73 12.86 0.00 0.00 567762.57 14619.57 735616.39 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0x8000 00:08:21.931 Nvme2n3 : 5.40 290.10 18.13 0.00 0.00 410327.26 25206.15 477505.38 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x8000 length 0x8000 00:08:21.931 Nvme2n3 : 5.56 242.71 15.17 0.00 0.00 471932.43 6856.07 642051.15 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x0 length 0x2000 00:08:21.931 Nvme3n1 : 5.43 314.34 19.65 0.00 0.00 374673.20 1260.31 438788.73 00:08:21.931 [2024-11-18T14:54:45.521Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:21.931 Verification LBA range: start 0x2000 length 0x2000 00:08:21.931 Nvme3n1 : 5.65 338.06 21.13 0.00 0.00 332404.58 469.46 606560.89 00:08:21.931 [2024-11-18T14:54:45.521Z] =================================================================================================================== 00:08:21.931 [2024-11-18T14:54:45.521Z] Total : 3087.04 192.94 0.00 0.00 466027.87 469.46 1058255.16 00:08:22.864 00:08:22.864 real 0m7.448s 00:08:22.864 user 
0m14.130s 00:08:22.864 sys 0m0.235s 00:08:22.864 14:54:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.864 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.864 ************************************ 00:08:22.864 END TEST bdev_verify_big_io 00:08:22.864 ************************************ 00:08:22.864 14:54:46 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.864 14:54:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:22.864 14:54:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.864 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.864 ************************************ 00:08:22.864 START TEST bdev_write_zeroes 00:08:22.864 ************************************ 00:08:22.864 14:54:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.864 [2024-11-18 14:54:46.344113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.864 [2024-11-18 14:54:46.344238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72901 ] 00:08:23.122 [2024-11-18 14:54:46.494105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.122 [2024-11-18 14:54:46.535897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.379 Running I/O for 1 seconds... 00:08:24.752 00:08:24.752 Latency(us) 00:08:24.752 [2024-11-18T14:54:48.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme0n1 : 1.01 12018.86 46.95 0.00 0.00 10622.99 8166.79 20769.87 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme1n1 : 1.01 12004.65 46.89 0.00 0.00 10620.35 8570.09 21072.34 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme2n1 : 1.01 11991.05 46.84 0.00 0.00 10600.35 8570.09 21072.34 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme2n2 : 1.02 11977.49 46.79 0.00 0.00 10595.41 8620.50 20164.92 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme2n3 : 1.02 12018.21 46.95 0.00 0.00 10544.91 6503.19 19761.62 00:08:24.752 [2024-11-18T14:54:48.342Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:24.752 Nvme3n1 : 1.02 12004.64 46.89 0.00 0.00 10530.51 6856.07 20164.92 00:08:24.752 [2024-11-18T14:54:48.342Z] =================================================================================================================== 00:08:24.752 [2024-11-18T14:54:48.342Z] Total : 72014.89 281.31 0.00 0.00 10585.67 6503.19 21072.34 00:08:24.752 00:08:24.752 real 0m1.869s 00:08:24.752 user 0m1.567s 00:08:24.752 sys 0m0.193s 00:08:24.752 14:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 
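The write_zeroes pass above issues one second of zero-fill writes per bdev at queue depth 128 and reports an aggregate Total row. A quick, hypothetical way to lift that aggregate out of a saved run; perf.log is an assumed capture of the table above with the leading elapsed-time column stripped:

# Hypothetical post-processing of a captured bdevperf table (perf.log is an
# assumed file; lines must not carry the 00:08:24.752-style prefix).
awk '$1 == "Total" { print "aggregate IOPS: " $3 ", MiB/s: " $4 }' perf.log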
00:08:24.752 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.752 ************************************ 00:08:24.752 END TEST bdev_write_zeroes 00:08:24.752 ************************************ 00:08:24.752 14:54:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.752 14:54:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:24.752 14:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.752 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:24.752 ************************************ 00:08:24.752 START TEST bdev_json_nonenclosed 00:08:24.752 ************************************ 00:08:24.752 14:54:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.752 [2024-11-18 14:54:48.265033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:24.752 [2024-11-18 14:54:48.265176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:08:25.013 [2024-11-18 14:54:48.413844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.013 [2024-11-18 14:54:48.469171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.013 [2024-11-18 14:54:48.469481] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:25.013 [2024-11-18 14:54:48.469524] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.274 00:08:25.274 real 0m0.412s 00:08:25.274 user 0m0.191s 00:08:25.274 sys 0m0.117s 00:08:25.274 14:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.274 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:25.274 ************************************ 00:08:25.274 END TEST bdev_json_nonenclosed 00:08:25.274 ************************************ 00:08:25.274 14:54:48 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.274 14:54:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:25.274 14:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.274 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:25.274 ************************************ 00:08:25.274 START TEST bdev_json_nonarray 00:08:25.274 ************************************ 00:08:25.274 14:54:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.274 [2024-11-18 14:54:48.767263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
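bdev_json_nonenclosed above, and bdev_json_nonarray starting here, are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if spdk_subsystem_init_from_json_config rejects it with the errors logged ("not enclosed in {}" and "'subsystems' should be an array"). Illustrative stand-ins for the two fixtures follow; the repo's actual nonenclosed.json and nonarray.json contents are not shown in this log, so these are assumptions that merely trip the same checks:

# Illustrative malformed configs (assumed contents, not the real fixtures).
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "not": "an array" } }
EOF
# Feeding either to bdevperf --json is expected to fail with the
# json_config.c error seen above, and the test treats that failure as a pass.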
00:08:25.274 [2024-11-18 14:54:48.767459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72963 ] 00:08:25.534 [2024-11-18 14:54:48.920617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.534 [2024-11-18 14:54:48.962599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.534 [2024-11-18 14:54:48.962819] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:25.534 [2024-11-18 14:54:48.962840] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.534 00:08:25.534 real 0m0.368s 00:08:25.534 user 0m0.138s 00:08:25.534 sys 0m0.126s 00:08:25.534 14:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.534 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.534 ************************************ 00:08:25.534 END TEST bdev_json_nonarray 00:08:25.534 ************************************ 00:08:25.534 14:54:49 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:25.534 14:54:49 -- bdev/blockdev.sh@809 -- # cleanup 00:08:25.534 14:54:49 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:25.534 14:54:49 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.534 14:54:49 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:25.534 14:54:49 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:25.534 00:08:25.534 real 0m54.872s 00:08:25.534 user 1m36.064s 00:08:25.534 sys 0m5.562s 00:08:25.534 14:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.534 ************************************ 00:08:25.534 END TEST blockdev_nvme 00:08:25.534 ************************************ 00:08:25.534 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.794 14:54:49 -- spdk/autotest.sh@206 -- # uname -s 00:08:25.794 14:54:49 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:25.794 14:54:49 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.794 14:54:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.794 14:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.794 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.794 ************************************ 00:08:25.794 START TEST blockdev_nvme_gpt 00:08:25.794 ************************************ 00:08:25.794 14:54:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:25.794 * Looking for test storage... 
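Every START TEST / END TEST banner pair in this log, including the blockdev_nvme_gpt one just printed, comes from the run_test helper in autotest_common.sh, which also produces the real/user/sys timing lines. A rough sketch of its shape, simplified: the real helper additionally manages xtrace state and propagates the wrapped command's exit code:

# Rough sketch of the run_test shape behind the banners above (simplified).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # yields the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}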
00:08:25.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:25.794 14:54:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:25.794 14:54:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:25.794 14:54:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:25.794 14:54:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:25.794 14:54:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:25.794 14:54:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:25.794 14:54:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:25.794 14:54:49 -- scripts/common.sh@335 -- # IFS=.-: 00:08:25.794 14:54:49 -- scripts/common.sh@335 -- # read -ra ver1 00:08:25.794 14:54:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.794 14:54:49 -- scripts/common.sh@336 -- # read -ra ver2 00:08:25.794 14:54:49 -- scripts/common.sh@337 -- # local 'op=<' 00:08:25.794 14:54:49 -- scripts/common.sh@339 -- # ver1_l=2 00:08:25.794 14:54:49 -- scripts/common.sh@340 -- # ver2_l=1 00:08:25.794 14:54:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:25.794 14:54:49 -- scripts/common.sh@343 -- # case "$op" in 00:08:25.794 14:54:49 -- scripts/common.sh@344 -- # : 1 00:08:25.794 14:54:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:25.794 14:54:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.794 14:54:49 -- scripts/common.sh@364 -- # decimal 1 00:08:25.794 14:54:49 -- scripts/common.sh@352 -- # local d=1 00:08:25.794 14:54:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.794 14:54:49 -- scripts/common.sh@354 -- # echo 1 00:08:25.794 14:54:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:25.794 14:54:49 -- scripts/common.sh@365 -- # decimal 2 00:08:25.794 14:54:49 -- scripts/common.sh@352 -- # local d=2 00:08:25.794 14:54:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.794 14:54:49 -- scripts/common.sh@354 -- # echo 2 00:08:25.794 14:54:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:25.794 14:54:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:25.794 14:54:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:25.794 14:54:49 -- scripts/common.sh@367 -- # return 0 00:08:25.794 14:54:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.794 14:54:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:25.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.794 --rc genhtml_branch_coverage=1 00:08:25.794 --rc genhtml_function_coverage=1 00:08:25.794 --rc genhtml_legend=1 00:08:25.794 --rc geninfo_all_blocks=1 00:08:25.794 --rc geninfo_unexecuted_blocks=1 00:08:25.794 00:08:25.794 ' 00:08:25.794 14:54:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:25.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.794 --rc genhtml_branch_coverage=1 00:08:25.794 --rc genhtml_function_coverage=1 00:08:25.794 --rc genhtml_legend=1 00:08:25.794 --rc geninfo_all_blocks=1 00:08:25.794 --rc geninfo_unexecuted_blocks=1 00:08:25.794 00:08:25.794 ' 00:08:25.794 14:54:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:25.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.794 --rc genhtml_branch_coverage=1 00:08:25.794 --rc genhtml_function_coverage=1 00:08:25.794 --rc genhtml_legend=1 00:08:25.794 --rc geninfo_all_blocks=1 00:08:25.794 --rc geninfo_unexecuted_blocks=1 00:08:25.794 00:08:25.794 ' 00:08:25.794 14:54:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:25.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.794 --rc genhtml_branch_coverage=1 00:08:25.794 --rc genhtml_function_coverage=1 00:08:25.794 --rc genhtml_legend=1 00:08:25.794 --rc geninfo_all_blocks=1 00:08:25.794 --rc geninfo_unexecuted_blocks=1 00:08:25.794 00:08:25.794 ' 00:08:25.794 14:54:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:25.794 14:54:49 -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.794 14:54:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:25.794 14:54:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:25.794 14:54:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:25.794 14:54:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:25.794 14:54:49 -- bdev/blockdev.sh@18 -- # : 00:08:25.794 14:54:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:25.794 14:54:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:25.794 14:54:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:25.794 14:54:49 -- bdev/blockdev.sh@672 -- # uname -s 00:08:25.794 14:54:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:25.794 14:54:49 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:25.794 14:54:49 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:25.794 14:54:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:25.794 14:54:49 -- bdev/blockdev.sh@682 -- # dek= 00:08:25.794 14:54:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:25.794 14:54:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:25.794 14:54:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:25.794 14:54:49 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:25.794 14:54:49 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:25.794 14:54:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:25.794 14:54:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=73046 00:08:25.794 14:54:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.794 14:54:49 -- bdev/blockdev.sh@47 -- # waitforlisten 73046 00:08:25.794 14:54:49 -- common/autotest_common.sh@829 -- # '[' -z 73046 ']' 00:08:25.794 14:54:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.794 14:54:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.794 14:54:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.794 14:54:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:25.794 14:54:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.794 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:26.055 [2024-11-18 14:54:49.411120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
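Here start_spdk_tgt launches a standalone target (pid 73046 in this run) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that handshake; the poll loop is simplified, and the real waitforlisten also re-checks that the pid is still alive between attempts:

# Sketch of the start_spdk_tgt + waitforlisten step traced above.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" &
spdk_tgt_pid=$!
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # poll until the UNIX domain socket accepts RPCs
done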
00:08:26.055 [2024-11-18 14:54:49.411259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73046 ] 00:08:26.055 [2024-11-18 14:54:49.560308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.055 [2024-11-18 14:54:49.604819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.055 [2024-11-18 14:54:49.605036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.996 14:54:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.996 14:54:50 -- common/autotest_common.sh@862 -- # return 0 00:08:26.996 14:54:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:26.996 14:54:50 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:26.996 14:54:50 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:27.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.257 Waiting for block devices as requested 00:08:27.257 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.516 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.516 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.516 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.939 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:32.939 14:54:56 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:32.939 14:54:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:32.939 14:54:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:32.939 14:54:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:32.939 14:54:56 -- 
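The lcov trace above is scripts/common.sh deciding whether this lcov is older than 2.x before enabling the 1.x-only branch/function coverage options. A minimal standalone sketch of that dotted-version comparison, assuming purely numeric fields (the real helper is cmp_versions in scripts/common.sh; ver_lt here is an illustrative name):

ver_lt() {
    # Succeeds (returns 0) when version $1 sorts strictly before $2.
    # Fields are split on '.', '-' and ':' exactly like the traced
    # IFS=.-: read, and missing fields compare as 0.
    local -a v1 v2
    local IFS='.-:' i n
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1  # equal is not "less than"
}

# Mirrors the check above: 1.15 < 2, so the lcov 1.x rc options get set.
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi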
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:32.939 14:54:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:32.939 14:54:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:32.939 14:54:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:32.939 14:54:56 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:32.939 14:54:56 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:32.939 14:54:56 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:32.939 14:54:56 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:32.939 14:54:56 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:32.939 14:54:56 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:32.939 14:54:56 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:32.939 14:54:56 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:32.939 BYT; 00:08:32.939 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:32.939 14:54:56 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:32.939 BYT; 00:08:32.939 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:32.939 14:54:56 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:32.939 14:54:56 -- bdev/blockdev.sh@114 -- # break 00:08:32.939 14:54:56 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:32.939 14:54:56 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:32.939 14:54:56 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:32.939 14:54:56 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:32.939 14:54:56 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:32.939 14:54:56 -- scripts/common.sh@410 -- # local spdk_guid 00:08:32.939 14:54:56 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:32.940 14:54:56 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.940 14:54:56 -- scripts/common.sh@415 -- # IFS='()' 00:08:32.940 14:54:56 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:32.940 14:54:56 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.940 14:54:56 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:32.940 14:54:56 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.940 14:54:56 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.940 14:54:56 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:32.940 14:54:56 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:32.940 14:54:56 -- scripts/common.sh@422 -- # local spdk_guid 00:08:32.940 14:54:56 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:32.940 14:54:56 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.940 14:54:56 -- scripts/common.sh@427 -- # IFS='()' 00:08:32.940 14:54:56 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:32.940 14:54:56 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:32.940 14:54:56 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:32.940 14:54:56 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.940 14:54:56 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.940 14:54:56 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:32.940 14:54:56 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:33.885 The operation has completed successfully. 00:08:33.885 14:54:57 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:34.830 The operation has completed successfully. 
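The setup_gpt_conf steps traced above (blockdev.sh@122-131) build a fresh GPT label on the unlabeled test disk and stamp SPDK's partition-type GUIDs onto the two halves. Condensed into a standalone sketch — the paths and fixed unique GUIDs come from the trace, while the raw/guid variable names are illustrative:

GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

# gpt.h carries the GUID inside a macro argument list, so grab the text
# between the parentheses and drop the 0x prefixes, as the traced
# IFS='()' read does:
#   0x6527994e-0x2c5a-0x4eec-... -> 6527994e-2c5a-4eec-...
IFS='()' read -r _ raw _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
guid=${raw//0x/}

# Two halves of the disk, then the SPDK type GUID and a fixed unique
# GUID on partition 1, exactly as sgdisk -t/-u is used above.
parted -s /dev/nvme2n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t "1:$guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1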
00:08:34.830 14:54:58 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.772 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.772 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.772 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.772 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.773 14:54:59 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:35.773 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.773 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:35.773 [] 00:08:35.773 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.773 14:54:59 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:35.773 14:54:59 -- bdev/blockdev.sh@79 -- # local json 00:08:35.773 14:54:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:35.773 14:54:59 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:35.773 14:54:59 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:35.773 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.773 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.033 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.033 14:54:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:36.033 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.033 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.033 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.033 14:54:59 -- bdev/blockdev.sh@738 -- # cat 00:08:36.033 14:54:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:36.033 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.033 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.033 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.033 14:54:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:36.033 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.033 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.294 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.294 14:54:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:36.294 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.294 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.294 14:54:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.294 14:54:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:36.294 14:54:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:36.294 14:54:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.294 14:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:36.294 14:54:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:36.294 14:54:59 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.294 14:54:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:36.294 14:54:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:36.295 14:54:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "932d7680-9666-4ea6-88fd-2942a1149d36"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "932d7680-9666-4ea6-88fd-2942a1149d36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "6c92646f-7f66-4aa0-afc2-87153d99ce18"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c92646f-7f66-4aa0-afc2-87153d99ce18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d701a645-0605-4f43-b42a-332887237cc0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d701a645-0605-4f43-b42a-332887237cc0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c087c4e3-678e-48a5-90fe-defa86401ef4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c087c4e3-678e-48a5-90fe-defa86401ef4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8f6a0c5e-2760-4b99-bd22-9ba4d6f5e279"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8f6a0c5e-2760-4b99-bd22-9ba4d6f5e279",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:36.295 14:54:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:36.295 14:54:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:36.295 14:54:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:36.295 14:54:59 -- bdev/blockdev.sh@752 -- # killprocess 73046 00:08:36.295 14:54:59 -- common/autotest_common.sh@936 -- # '[' -z 73046 ']' 00:08:36.295 14:54:59 -- common/autotest_common.sh@940 -- # kill -0 73046 00:08:36.295 14:54:59 -- common/autotest_common.sh@941 -- # uname 00:08:36.295 14:54:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.295 14:54:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73046 00:08:36.295 14:54:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.295 killing process with pid 73046 00:08:36.295 14:54:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.295 14:54:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73046' 00:08:36.295 14:54:59 -- common/autotest_common.sh@955 -- # kill 73046 00:08:36.295 14:54:59 -- common/autotest_common.sh@960 -- # wait 73046 00:08:36.556 14:55:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:36.556 14:55:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:36.556 14:55:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:36.556 14:55:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.556 14:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:36.556 ************************************ 00:08:36.556 START TEST bdev_hello_world 00:08:36.556 ************************************ 00:08:36.556 14:55:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:36.556 [2024-11-18 14:55:00.074731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.556 [2024-11-18 14:55:00.074841] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73674 ] 00:08:36.817 [2024-11-18 14:55:00.214850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.817 [2024-11-18 14:55:00.244661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.078 [2024-11-18 14:55:00.593614] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:37.078 [2024-11-18 14:55:00.593660] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:37.078 [2024-11-18 14:55:00.593677] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:37.078 [2024-11-18 14:55:00.595687] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:37.078 [2024-11-18 14:55:00.596270] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:37.078 [2024-11-18 14:55:00.596300] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:37.078 [2024-11-18 14:55:00.596472] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:37.078 00:08:37.078 [2024-11-18 14:55:00.596501] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:37.340 00:08:37.340 real 0m0.729s 00:08:37.340 user 0m0.488s 00:08:37.340 sys 0m0.139s 00:08:37.340 14:55:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.340 ************************************ 00:08:37.340 END TEST bdev_hello_world 00:08:37.340 ************************************ 00:08:37.340 14:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:37.340 14:55:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:37.340 14:55:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:37.340 14:55:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.340 14:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:37.340 ************************************ 00:08:37.340 START TEST bdev_bounds 00:08:37.340 ************************************ 00:08:37.340 14:55:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:37.340 14:55:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=73705 00:08:37.340 14:55:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:37.340 Process bdevio pid: 73705 00:08:37.340 14:55:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 73705' 00:08:37.340 14:55:00 -- bdev/blockdev.sh@291 -- # waitforlisten 73705 00:08:37.340 14:55:00 -- common/autotest_common.sh@829 -- # '[' -z 73705 ']' 00:08:37.340 14:55:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.340 14:55:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.340 14:55:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
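Both the spdk_tgt start earlier and this bdevio start gate on waitforlisten before any RPC is sent. A rough sketch of that pattern, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh retries up to the max_retries=100 shown above, but differs in detail):

wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        # Bail out if the target died instead of listening.
        kill -0 "$pid" 2>/dev/null || return 1
        # Any successful RPC means the app finished init and is listening.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
            rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}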
00:08:37.340 14:55:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.340 14:55:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:37.340 14:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:37.340 [2024-11-18 14:55:00.847577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:37.340 [2024-11-18 14:55:00.847684] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73705 ] 00:08:37.601 [2024-11-18 14:55:01.004054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.601 [2024-11-18 14:55:01.046739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.601 [2024-11-18 14:55:01.047072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.601 [2024-11-18 14:55:01.047144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.172 14:55:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.172 14:55:01 -- common/autotest_common.sh@862 -- # return 0 00:08:38.172 14:55:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:38.435 I/O targets: 00:08:38.435 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:38.435 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:38.435 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:38.435 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.435 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.435 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:38.435 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:38.435 00:08:38.435 00:08:38.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.435 http://cunit.sourceforge.net/ 00:08:38.435 00:08:38.435 00:08:38.435 Suite: bdevio tests on: Nvme3n1 00:08:38.435 Test: blockdev write read block ...passed 00:08:38.435 Test: blockdev write zeroes read block ...passed 00:08:38.435 Test: blockdev write zeroes read no split ...passed 00:08:38.435 Test: blockdev write zeroes read split ...passed 00:08:38.435 Test: blockdev write zeroes read split partial ...passed 00:08:38.435 Test: blockdev reset ...[2024-11-18 14:55:01.779120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:38.435 [2024-11-18 14:55:01.780841] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.435 passed 00:08:38.435 Test: blockdev write read 8 blocks ...passed 00:08:38.435 Test: blockdev write read size > 128k ...passed 00:08:38.435 Test: blockdev write read invalid size ...passed 00:08:38.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.435 Test: blockdev write read max offset ...passed 00:08:38.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.435 Test: blockdev writev readv 8 blocks ...passed 00:08:38.435 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.435 Test: blockdev writev readv block ...passed 00:08:38.435 Test: blockdev writev readv size > 128k ...passed 00:08:38.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.435 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.785857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd604000 len:0x1000 00:08:38.435 [2024-11-18 14:55:01.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme passthru rw ...passed 00:08:38.435 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:55:01.786988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.435 [2024-11-18 14:55:01.787025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme admin passthru ...passed 00:08:38.435 Test: blockdev copy ...passed 00:08:38.435 Suite: bdevio tests on: Nvme2n3 00:08:38.435 Test: blockdev write read block ...passed 00:08:38.435 Test: blockdev write zeroes read block ...passed 00:08:38.435 Test: blockdev write zeroes read no split ...passed 00:08:38.435 Test: blockdev write zeroes read split ...passed 00:08:38.435 Test: blockdev write zeroes read split partial ...passed 00:08:38.435 Test: blockdev reset ...[2024-11-18 14:55:01.807917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.435 [2024-11-18 14:55:01.810822] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.435 passed 00:08:38.435 Test: blockdev write read 8 blocks ...passed 00:08:38.435 Test: blockdev write read size > 128k ...passed 00:08:38.435 Test: blockdev write read invalid size ...passed 00:08:38.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.435 Test: blockdev write read max offset ...passed 00:08:38.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.435 Test: blockdev writev readv 8 blocks ...passed 00:08:38.435 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.435 Test: blockdev writev readv block ...passed 00:08:38.435 Test: blockdev writev readv size > 128k ...passed 00:08:38.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.435 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.816636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd604000 len:0x1000 00:08:38.435 [2024-11-18 14:55:01.816680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme passthru rw ...passed 00:08:38.435 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:55:01.817426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.435 [2024-11-18 14:55:01.817452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme admin passthru ...passed 00:08:38.435 Test: blockdev copy ...passed 00:08:38.435 Suite: bdevio tests on: Nvme2n2 00:08:38.435 Test: blockdev write read block ...passed 00:08:38.435 Test: blockdev write zeroes read block ...passed 00:08:38.435 Test: blockdev write zeroes read no split ...passed 00:08:38.435 Test: blockdev write zeroes read split ...passed 00:08:38.435 Test: blockdev write zeroes read split partial ...passed 00:08:38.435 Test: blockdev reset ...[2024-11-18 14:55:01.833275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.435 [2024-11-18 14:55:01.835048] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.435 passed 00:08:38.435 Test: blockdev write read 8 blocks ...passed 00:08:38.435 Test: blockdev write read size > 128k ...passed 00:08:38.435 Test: blockdev write read invalid size ...passed 00:08:38.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.435 Test: blockdev write read max offset ...passed 00:08:38.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.435 Test: blockdev writev readv 8 blocks ...passed 00:08:38.435 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.435 Test: blockdev writev readv block ...passed 00:08:38.435 Test: blockdev writev readv size > 128k ...passed 00:08:38.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.435 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.840955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0822000 len:0x1000 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme passthru rw ...[2024-11-18 14:55:01.840997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:55:01.841625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.435 [2024-11-18 14:55:01.841648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme admin passthru ...passed 00:08:38.435 Test: blockdev copy ...passed 00:08:38.435 Suite: bdevio tests on: Nvme2n1 00:08:38.435 Test: blockdev write read block ...passed 00:08:38.435 Test: blockdev write zeroes read block ...passed 00:08:38.435 Test: blockdev write zeroes read no split ...passed 00:08:38.435 Test: blockdev write zeroes read split ...passed 00:08:38.435 Test: blockdev write zeroes read split partial ...passed 00:08:38.435 Test: blockdev reset ...[2024-11-18 14:55:01.859882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:38.435 [2024-11-18 14:55:01.861605] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.435 passed 00:08:38.435 Test: blockdev write read 8 blocks ...passed 00:08:38.435 Test: blockdev write read size > 128k ...passed 00:08:38.435 Test: blockdev write read invalid size ...passed 00:08:38.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.435 Test: blockdev write read max offset ...passed 00:08:38.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.435 Test: blockdev writev readv 8 blocks ...passed 00:08:38.435 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.435 Test: blockdev writev readv block ...passed 00:08:38.435 Test: blockdev writev readv size > 128k ...passed 00:08:38.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.435 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.868116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd60d000 len:0x1000 00:08:38.435 [2024-11-18 14:55:01.868158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev nvme passthru rw ...passed 00:08:38.435 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.435 Test: blockdev nvme admin passthru ...[2024-11-18 14:55:01.868725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.435 [2024-11-18 14:55:01.868749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.435 passed 00:08:38.435 Test: blockdev copy ...passed 00:08:38.435 Suite: bdevio tests on: Nvme1n1 00:08:38.435 Test: blockdev write read block ...passed 00:08:38.435 Test: blockdev write zeroes read block ...passed 00:08:38.436 Test: blockdev write zeroes read no split ...passed 00:08:38.436 Test: blockdev write zeroes read split ...passed 00:08:38.436 Test: blockdev write zeroes read split partial ...passed 00:08:38.436 Test: blockdev reset ...[2024-11-18 14:55:01.883201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:38.436 [2024-11-18 14:55:01.885786] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:38.436 passed 00:08:38.436 Test: blockdev write read 8 blocks ...passed 00:08:38.436 Test: blockdev write read size > 128k ...passed 00:08:38.436 Test: blockdev write read invalid size ...passed 00:08:38.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.436 Test: blockdev write read max offset ...passed 00:08:38.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.436 Test: blockdev writev readv 8 blocks ...passed 00:08:38.436 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.436 Test: blockdev writev readv block ...passed 00:08:38.436 Test: blockdev writev readv size > 128k ...passed 00:08:38.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.436 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.893632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd232000 len:0x1000 00:08:38.436 [2024-11-18 14:55:01.893680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:38.436 passed 00:08:38.436 Test: blockdev nvme passthru rw ...passed 00:08:38.436 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:55:01.894824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:38.436 [2024-11-18 14:55:01.894857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:38.436 passed 00:08:38.436 Test: blockdev nvme admin passthru ...passed 00:08:38.436 Test: blockdev copy ...passed 00:08:38.436 Suite: bdevio tests on: Nvme0n1p2 00:08:38.436 Test: blockdev write read block ...passed 00:08:38.436 Test: blockdev write zeroes read block ...passed 00:08:38.436 Test: blockdev write zeroes read no split ...passed 00:08:38.436 Test: blockdev write zeroes read split ...passed 00:08:38.436 Test: blockdev write zeroes read split partial ...passed 00:08:38.436 Test: blockdev reset ...[2024-11-18 14:55:01.923936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:38.436 [2024-11-18 14:55:01.926182] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:38.436 passed 00:08:38.436 Test: blockdev write read 8 blocks ...passed 00:08:38.436 Test: blockdev write read size > 128k ...passed 00:08:38.436 Test: blockdev write read invalid size ...passed 00:08:38.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.436 Test: blockdev write read max offset ...passed 00:08:38.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.436 Test: blockdev writev readv 8 blocks ...passed 00:08:38.436 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.436 Test: blockdev writev readv block ...passed 00:08:38.436 Test: blockdev writev readv size > 128k ...passed 00:08:38.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.436 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.931801] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:38.436 separate metadata which is not supported yet. 
00:08:38.436 passed 00:08:38.436 Test: blockdev nvme passthru rw ...passed 00:08:38.436 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.436 Test: blockdev nvme admin passthru ...passed 00:08:38.436 Test: blockdev copy ...passed 00:08:38.436 Suite: bdevio tests on: Nvme0n1p1 00:08:38.436 Test: blockdev write read block ...passed 00:08:38.436 Test: blockdev write zeroes read block ...passed 00:08:38.436 Test: blockdev write zeroes read no split ...passed 00:08:38.436 Test: blockdev write zeroes read split ...passed 00:08:38.436 Test: blockdev write zeroes read split partial ...passed 00:08:38.436 Test: blockdev reset ...[2024-11-18 14:55:01.944430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:38.436 [2024-11-18 14:55:01.945921] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:38.436 passed 00:08:38.436 Test: blockdev write read 8 blocks ...passed 00:08:38.436 Test: blockdev write read size > 128k ...passed 00:08:38.436 Test: blockdev write read invalid size ...passed 00:08:38.436 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:38.436 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:38.436 Test: blockdev write read max offset ...passed 00:08:38.436 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:38.436 Test: blockdev writev readv 8 blocks ...passed 00:08:38.436 Test: blockdev writev readv 30 x 1block ...passed 00:08:38.436 Test: blockdev writev readv block ...passed 00:08:38.436 Test: blockdev writev readv size > 128k ...passed 00:08:38.436 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:38.436 Test: blockdev comparev and writev ...[2024-11-18 14:55:01.950333] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:38.436 separate metadata which is not supported yet. 
00:08:38.436 passed 00:08:38.436 Test: blockdev nvme passthru rw ...passed 00:08:38.436 Test: blockdev nvme passthru vendor specific ...passed 00:08:38.436 Test: blockdev nvme admin passthru ...passed 00:08:38.436 Test: blockdev copy ...passed 00:08:38.436 00:08:38.436 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.436 suites 7 7 n/a 0 0 00:08:38.436 tests 161 161 161 0 0 00:08:38.436 asserts 1006 1006 1006 0 n/a 00:08:38.436 00:08:38.436 Elapsed time = 0.425 seconds 00:08:38.436 0 00:08:38.436 14:55:01 -- bdev/blockdev.sh@293 -- # killprocess 73705 00:08:38.436 14:55:01 -- common/autotest_common.sh@936 -- # '[' -z 73705 ']' 00:08:38.436 14:55:01 -- common/autotest_common.sh@940 -- # kill -0 73705 00:08:38.436 14:55:01 -- common/autotest_common.sh@941 -- # uname 00:08:38.436 14:55:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:38.436 14:55:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73705 00:08:38.436 14:55:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:38.436 killing process with pid 73705 00:08:38.436 14:55:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:38.436 14:55:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73705' 00:08:38.436 14:55:01 -- common/autotest_common.sh@955 -- # kill 73705 00:08:38.436 14:55:01 -- common/autotest_common.sh@960 -- # wait 73705 00:08:38.698 ************************************ 00:08:38.698 END TEST bdev_bounds 00:08:38.698 ************************************ 00:08:38.698 14:55:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:38.698 00:08:38.698 real 0m1.335s 00:08:38.698 user 0m3.303s 00:08:38.698 sys 0m0.291s 00:08:38.698 14:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.698 14:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.698 14:55:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:38.698 14:55:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:38.698 14:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.698 14:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.698 ************************************ 00:08:38.698 START TEST bdev_nbd 00:08:38.698 ************************************ 00:08:38.698 14:55:02 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:38.698 14:55:02 -- bdev/blockdev.sh@298 -- # uname -s 00:08:38.698 14:55:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:38.698 14:55:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.698 14:55:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.698 14:55:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:38.698 14:55:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:38.698 14:55:02 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:38.698 14:55:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:38.698 14:55:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
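From here the nbd test exports each bdev as a kernel /dev/nbdX node through the bdev_svc RPC socket started just below. A trimmed sketch of that mapping step (device list shortened to three entries; nbd_start_disk and nbd_stop_disk are the RPCs the trace invokes):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdevs=(Nvme0n1p1 Nvme0n1p2 Nvme1n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10)

for i in "${!bdevs[@]}"; do
    # Bind bdev i to its nbd node; the RPC echoes the device it attached.
    $rpc nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
done

# ... per-device read checks run here (see the waitfornbd sketch below) ...

for nbd in "${nbds[@]}"; do
    $rpc nbd_stop_disk "$nbd"
done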
00:08:38.698 14:55:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:38.698 14:55:02 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:38.698 14:55:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:38.698 14:55:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:38.698 14:55:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:38.698 14:55:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:38.698 14:55:02 -- bdev/blockdev.sh@316 -- # nbd_pid=73754 00:08:38.698 14:55:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:38.698 14:55:02 -- bdev/blockdev.sh@318 -- # waitforlisten 73754 /var/tmp/spdk-nbd.sock 00:08:38.698 14:55:02 -- common/autotest_common.sh@829 -- # '[' -z 73754 ']' 00:08:38.698 14:55:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:38.698 14:55:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:38.698 14:55:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:38.698 14:55:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.698 14:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.698 14:55:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:38.698 [2024-11-18 14:55:02.238401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
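The per-node checks traced after this implement autotest_common.sh's waitfornbd: wait for the device to show up in /proc/partitions, then prove it serves reads with one 4 KiB O_DIRECT block. Roughly (the scratch-file path is illustrative; the trace uses test/bdev/nbdtest):

waitfornbd() {
    local nbd=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    # A single direct read; a dead nbd connection makes dd fail here,
    # and a zero-byte result file counts as failure too.
    dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [[ $(stat -c %s /tmp/nbdtest) != 0 ]]
    local rc=$?
    rm -f /tmp/nbdtest
    return $rc
}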
00:08:38.698 [2024-11-18 14:55:02.238522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.959 [2024-11-18 14:55:02.387730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.959 [2024-11-18 14:55:02.418065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.903 14:55:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.903 14:55:03 -- common/autotest_common.sh@862 -- # return 0 00:08:39.903 14:55:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@24 -- # local i 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:39.903 14:55:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:39.903 14:55:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:39.903 14:55:03 -- common/autotest_common.sh@867 -- # local i 00:08:39.903 14:55:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:39.903 14:55:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:39.903 14:55:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:39.903 14:55:03 -- common/autotest_common.sh@871 -- # break 00:08:39.903 14:55:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:39.903 14:55:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:39.903 14:55:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:39.903 1+0 records in 00:08:39.903 1+0 records out 00:08:39.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105099 s, 3.9 MB/s 00:08:39.903 14:55:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.903 14:55:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:39.904 14:55:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:39.904 14:55:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:39.904 14:55:03 -- common/autotest_common.sh@887 -- # return 0 00:08:39.904 14:55:03 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:39.904 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:39.904 14:55:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:40.165 14:55:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:40.165 14:55:03 -- common/autotest_common.sh@867 -- # local i 00:08:40.165 14:55:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.165 14:55:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.165 14:55:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:40.165 14:55:03 -- common/autotest_common.sh@871 -- # break 00:08:40.165 14:55:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.165 14:55:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.165 14:55:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.165 1+0 records in 00:08:40.165 1+0 records out 00:08:40.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701131 s, 5.8 MB/s 00:08:40.165 14:55:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.165 14:55:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.165 14:55:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.165 14:55:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.165 14:55:03 -- common/autotest_common.sh@887 -- # return 0 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.165 14:55:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:40.426 14:55:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:40.426 14:55:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:40.426 14:55:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:40.426 14:55:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:40.426 14:55:03 -- common/autotest_common.sh@867 -- # local i 00:08:40.426 14:55:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.426 14:55:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.426 14:55:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:40.426 14:55:03 -- common/autotest_common.sh@871 -- # break 00:08:40.426 14:55:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.426 14:55:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.427 14:55:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.427 1+0 records in 00:08:40.427 1+0 records out 00:08:40.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0026321 s, 1.6 MB/s 00:08:40.427 14:55:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.427 14:55:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.427 14:55:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.427 14:55:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.427 14:55:03 -- 
common/autotest_common.sh@887 -- # return 0 00:08:40.427 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.427 14:55:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.427 14:55:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:40.427 14:55:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:40.427 14:55:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:40.689 14:55:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:40.689 14:55:04 -- common/autotest_common.sh@867 -- # local i 00:08:40.689 14:55:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:40.689 14:55:04 -- common/autotest_common.sh@871 -- # break 00:08:40.689 14:55:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.689 1+0 records in 00:08:40.689 1+0 records out 00:08:40.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047788 s, 8.6 MB/s 00:08:40.689 14:55:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.689 14:55:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.689 14:55:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.689 14:55:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.689 14:55:04 -- common/autotest_common.sh@887 -- # return 0 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:40.689 14:55:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:40.689 14:55:04 -- common/autotest_common.sh@867 -- # local i 00:08:40.689 14:55:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:40.689 14:55:04 -- common/autotest_common.sh@871 -- # break 00:08:40.689 14:55:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.689 14:55:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.689 1+0 records in 00:08:40.689 1+0 records out 00:08:40.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742865 s, 5.5 MB/s 00:08:40.689 14:55:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.689 14:55:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.689 14:55:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.689 14:55:04 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.689 14:55:04 -- common/autotest_common.sh@887 -- # return 0 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.689 14:55:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:40.950 14:55:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:40.950 14:55:04 -- common/autotest_common.sh@867 -- # local i 00:08:40.950 14:55:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:40.950 14:55:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:40.950 14:55:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:40.950 14:55:04 -- common/autotest_common.sh@871 -- # break 00:08:40.950 14:55:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:40.950 14:55:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:40.950 14:55:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:40.950 1+0 records in 00:08:40.950 1+0 records out 00:08:40.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001168 s, 3.5 MB/s 00:08:40.950 14:55:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.950 14:55:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:40.950 14:55:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:40.950 14:55:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:40.950 14:55:04 -- common/autotest_common.sh@887 -- # return 0 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:40.950 14:55:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:41.210 14:55:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:41.210 14:55:04 -- common/autotest_common.sh@867 -- # local i 00:08:41.210 14:55:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.210 14:55:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.210 14:55:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:41.210 14:55:04 -- common/autotest_common.sh@871 -- # break 00:08:41.210 14:55:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.210 14:55:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.210 14:55:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.210 1+0 records in 00:08:41.210 1+0 records out 00:08:41.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887574 s, 4.6 MB/s 00:08:41.210 14:55:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.210 14:55:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:41.210 14:55:04 -- common/autotest_common.sh@885 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.210 14:55:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.210 14:55:04 -- common/autotest_common.sh@887 -- # return 0 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.210 14:55:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd0", 00:08:41.472 "bdev_name": "Nvme0n1p1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd1", 00:08:41.472 "bdev_name": "Nvme0n1p2" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd2", 00:08:41.472 "bdev_name": "Nvme1n1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd3", 00:08:41.472 "bdev_name": "Nvme2n1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd4", 00:08:41.472 "bdev_name": "Nvme2n2" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd5", 00:08:41.472 "bdev_name": "Nvme2n3" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd6", 00:08:41.472 "bdev_name": "Nvme3n1" 00:08:41.472 } 00:08:41.472 ]' 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd0", 00:08:41.472 "bdev_name": "Nvme0n1p1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd1", 00:08:41.472 "bdev_name": "Nvme0n1p2" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd2", 00:08:41.472 "bdev_name": "Nvme1n1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd3", 00:08:41.472 "bdev_name": "Nvme2n1" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd4", 00:08:41.472 "bdev_name": "Nvme2n2" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd5", 00:08:41.472 "bdev_name": "Nvme2n3" 00:08:41.472 }, 00:08:41.472 { 00:08:41.472 "nbd_device": "/dev/nbd6", 00:08:41.472 "bdev_name": "Nvme3n1" 00:08:41.472 } 00:08:41.472 ]' 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@51 -- # local i 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.472 14:55:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:41.733 14:55:05 -- 
bdev/nbd_common.sh@41 -- # break 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@41 -- # break 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.733 14:55:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@41 -- # break 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.993 14:55:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@41 -- # break 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.253 14:55:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@41 -- # break 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.515 14:55:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:42.515 14:55:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:42.515 14:55:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:08:42.515 14:55:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:42.515 14:55:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.515 14:55:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.516 14:55:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:42.516 14:55:06 -- bdev/nbd_common.sh@41 -- # break 00:08:42.516 14:55:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.516 14:55:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.516 14:55:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@41 -- # break 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.777 14:55:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@65 -- # true 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@65 -- # count=0 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@122 -- # count=0 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@127 -- # return 0 00:08:43.038 14:55:06 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:43.038 14:55:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@12 -- # local i 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.039 14:55:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:43.301 /dev/nbd0 00:08:43.301 14:55:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:43.301 14:55:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:43.301 14:55:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:43.301 14:55:06 -- common/autotest_common.sh@867 -- # local i 00:08:43.301 14:55:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.301 14:55:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.301 14:55:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:43.301 14:55:06 -- common/autotest_common.sh@871 -- # break 00:08:43.301 14:55:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.301 14:55:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.301 14:55:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.301 1+0 records in 00:08:43.301 1+0 records out 00:08:43.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326549 s, 12.5 MB/s 00:08:43.301 14:55:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.301 14:55:06 -- common/autotest_common.sh@884 -- # size=4096 00:08:43.301 14:55:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.301 14:55:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.301 14:55:06 -- common/autotest_common.sh@887 -- # return 0 00:08:43.301 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.301 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.301 14:55:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:43.562 /dev/nbd1 00:08:43.562 14:55:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:43.562 14:55:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:43.562 14:55:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:43.562 14:55:06 -- common/autotest_common.sh@867 -- # local i 00:08:43.562 14:55:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.562 14:55:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.562 14:55:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:43.562 14:55:06 -- common/autotest_common.sh@871 -- # break 00:08:43.562 14:55:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.562 14:55:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.562 14:55:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.562 1+0 records in 00:08:43.562 1+0 records out 00:08:43.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803022 s, 5.1 MB/s 00:08:43.562 14:55:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.562 14:55:06 -- 
common/autotest_common.sh@884 -- # size=4096 00:08:43.562 14:55:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.562 14:55:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.562 14:55:06 -- common/autotest_common.sh@887 -- # return 0 00:08:43.562 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.562 14:55:06 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.562 14:55:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:43.562 /dev/nbd10 00:08:43.824 14:55:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:43.824 14:55:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:43.824 14:55:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:43.824 14:55:07 -- common/autotest_common.sh@867 -- # local i 00:08:43.824 14:55:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.824 14:55:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.824 14:55:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:43.824 14:55:07 -- common/autotest_common.sh@871 -- # break 00:08:43.824 14:55:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.824 14:55:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.824 14:55:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.824 1+0 records in 00:08:43.824 1+0 records out 00:08:43.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776795 s, 5.3 MB/s 00:08:43.824 14:55:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.824 14:55:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:43.824 14:55:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.824 14:55:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.824 14:55:07 -- common/autotest_common.sh@887 -- # return 0 00:08:43.824 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.824 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:43.825 /dev/nbd11 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:43.825 14:55:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:43.825 14:55:07 -- common/autotest_common.sh@867 -- # local i 00:08:43.825 14:55:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.825 14:55:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.825 14:55:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:43.825 14:55:07 -- common/autotest_common.sh@871 -- # break 00:08:43.825 14:55:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.825 14:55:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.825 14:55:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.825 1+0 records in 00:08:43.825 1+0 records out 00:08:43.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769748 s, 5.3 MB/s 00:08:43.825 14:55:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
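Each device attach above repeats one readiness pattern: poll /proc/partitions until the nbd name appears, then prove the device is actually readable with a single O_DIRECT read. A minimal sketch of that waitfornbd helper, reconstructed from the xtrace output (the loop bound, paths, and dd/stat checks are taken from the trace; the sleep between polls is an assumption, since the trace does not show it):

    waitfornbd() {
        local nbd_name=$1
        local i
        # Poll until the kernel registers the device (at most 20 attempts).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed polling interval, not visible in the trace
        done
        # A single 4 KiB direct read proves the device answers, not just exists.
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]   # zero bytes read back would mean the device is unusable
    }

The direct-I/O read matters here: it bypasses the page cache, so a device that shows up in /proc/partitions but cannot serve I/O fails at this step rather than later, during data verification.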
00:08:43.825 14:55:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:43.825 14:55:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.825 14:55:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.825 14:55:07 -- common/autotest_common.sh@887 -- # return 0 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:43.825 14:55:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:44.086 /dev/nbd12 00:08:44.086 14:55:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:44.086 14:55:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:44.086 14:55:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:44.086 14:55:07 -- common/autotest_common.sh@867 -- # local i 00:08:44.086 14:55:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:44.086 14:55:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:44.086 14:55:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:44.087 14:55:07 -- common/autotest_common.sh@871 -- # break 00:08:44.087 14:55:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:44.087 14:55:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:44.087 14:55:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.087 1+0 records in 00:08:44.087 1+0 records out 00:08:44.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134827 s, 3.0 MB/s 00:08:44.087 14:55:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.087 14:55:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:44.087 14:55:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.087 14:55:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:44.087 14:55:07 -- common/autotest_common.sh@887 -- # return 0 00:08:44.087 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.087 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:44.087 14:55:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:44.348 /dev/nbd13 00:08:44.348 14:55:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:44.348 14:55:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:44.348 14:55:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:44.349 14:55:07 -- common/autotest_common.sh@867 -- # local i 00:08:44.349 14:55:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:44.349 14:55:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:44.349 14:55:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:44.349 14:55:07 -- common/autotest_common.sh@871 -- # break 00:08:44.349 14:55:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:44.349 14:55:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:44.349 14:55:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.349 1+0 records in 00:08:44.349 1+0 records out 00:08:44.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535047 s, 7.7 MB/s 00:08:44.349 14:55:07 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.349 14:55:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:44.349 14:55:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.349 14:55:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:44.349 14:55:07 -- common/autotest_common.sh@887 -- # return 0 00:08:44.349 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.349 14:55:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:44.349 14:55:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:44.610 /dev/nbd14 00:08:44.610 14:55:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:44.610 14:55:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:44.610 14:55:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:44.611 14:55:08 -- common/autotest_common.sh@867 -- # local i 00:08:44.611 14:55:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:44.611 14:55:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:44.611 14:55:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:44.611 14:55:08 -- common/autotest_common.sh@871 -- # break 00:08:44.611 14:55:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:44.611 14:55:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:44.611 14:55:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.611 1+0 records in 00:08:44.611 1+0 records out 00:08:44.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616232 s, 6.6 MB/s 00:08:44.611 14:55:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.611 14:55:08 -- common/autotest_common.sh@884 -- # size=4096 00:08:44.611 14:55:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.611 14:55:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:44.611 14:55:08 -- common/autotest_common.sh@887 -- # return 0 00:08:44.611 14:55:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.611 14:55:08 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:44.611 14:55:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.611 14:55:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.611 14:55:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd0", 00:08:44.872 "bdev_name": "Nvme0n1p1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd1", 00:08:44.872 "bdev_name": "Nvme0n1p2" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd10", 00:08:44.872 "bdev_name": "Nvme1n1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd11", 00:08:44.872 "bdev_name": "Nvme2n1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd12", 00:08:44.872 "bdev_name": "Nvme2n2" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd13", 00:08:44.872 "bdev_name": "Nvme2n3" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd14", 00:08:44.872 "bdev_name": "Nvme3n1" 00:08:44.872 } 00:08:44.872 ]' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:44.872 { 00:08:44.872 "nbd_device": 
"/dev/nbd0", 00:08:44.872 "bdev_name": "Nvme0n1p1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd1", 00:08:44.872 "bdev_name": "Nvme0n1p2" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd10", 00:08:44.872 "bdev_name": "Nvme1n1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd11", 00:08:44.872 "bdev_name": "Nvme2n1" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd12", 00:08:44.872 "bdev_name": "Nvme2n2" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd13", 00:08:44.872 "bdev_name": "Nvme2n3" 00:08:44.872 }, 00:08:44.872 { 00:08:44.872 "nbd_device": "/dev/nbd14", 00:08:44.872 "bdev_name": "Nvme3n1" 00:08:44.872 } 00:08:44.872 ]' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:44.872 /dev/nbd1 00:08:44.872 /dev/nbd10 00:08:44.872 /dev/nbd11 00:08:44.872 /dev/nbd12 00:08:44.872 /dev/nbd13 00:08:44.872 /dev/nbd14' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:44.872 /dev/nbd1 00:08:44.872 /dev/nbd10 00:08:44.872 /dev/nbd11 00:08:44.872 /dev/nbd12 00:08:44.872 /dev/nbd13 00:08:44.872 /dev/nbd14' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@65 -- # count=7 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@95 -- # count=7 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:44.872 256+0 records in 00:08:44.872 256+0 records out 00:08:44.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078182 s, 134 MB/s 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:44.872 256+0 records in 00:08:44.872 256+0 records out 00:08:44.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107015 s, 9.8 MB/s 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.872 14:55:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.134 256+0 records in 00:08:45.134 256+0 records out 00:08:45.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19349 s, 5.4 MB/s 00:08:45.134 14:55:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.134 14:55:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:45.427 256+0 records in 00:08:45.427 256+0 records out 00:08:45.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.195376 s, 5.4 MB/s 00:08:45.427 14:55:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.427 14:55:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:45.427 256+0 records in 00:08:45.427 256+0 records out 00:08:45.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198623 s, 5.3 MB/s 00:08:45.427 14:55:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.427 14:55:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:45.685 256+0 records in 00:08:45.685 256+0 records out 00:08:45.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175222 s, 6.0 MB/s 00:08:45.685 14:55:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.685 14:55:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:45.943 256+0 records in 00:08:45.943 256+0 records out 00:08:45.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0935648 s, 11.2 MB/s 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:45.943 256+0 records in 00:08:45.943 256+0 records out 00:08:45.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0720341 s, 14.6 MB/s 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@51 -- # local i 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.943 14:55:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@41 -- # break 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.201 14:55:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@41 -- # break 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:46.459 14:55:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@41 -- # break 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.459 14:55:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:46.717 14:55:10 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@41 -- # break 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.717 14:55:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@41 -- # break 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.976 14:55:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@41 -- # break 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@41 -- # break 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.234 14:55:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@65 -- # true 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@65 -- # count=0 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@66 -- # echo 0 
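Once all seven devices are torn down, nbd_get_count confirms that nothing is left attached. Reassembled from the trace (the variable names, RPC call, jq filter, and trailing true are all as logged; grep -c exits non-zero on a zero count, which the || true absorbs):

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        # Ask the SPDK app which nbd devices it still exports.
        nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # Extract the nbd_device fields and count the /dev/nbd entries.
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

In the run above the RPC returns an empty list, so the count is 0 and the '[' 0 -ne 0 ']' guard that follows passes.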
00:08:47.493 14:55:11 -- bdev/nbd_common.sh@104 -- # count=0 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@109 -- # return 0 00:08:47.493 14:55:11 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:47.493 14:55:11 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:47.751 malloc_lvol_verify 00:08:47.751 14:55:11 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:48.010 c482d86b-71ed-4593-a797-dbed841e5a01 00:08:48.010 14:55:11 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:48.269 729ba069-9a23-4f4a-9a28-ccfaff5b510b 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:48.269 /dev/nbd0 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:48.269 mke2fs 1.47.0 (5-Feb-2023) 00:08:48.269 Discarding device blocks: 0/4096 done 00:08:48.269 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:48.269 00:08:48.269 Allocating group tables: 0/1 done 00:08:48.269 Writing inode tables: 0/1 done 00:08:48.269 Creating journal (1024 blocks): done 00:08:48.269 Writing superblocks and filesystem accounting information: 0/1 done 00:08:48.269 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@51 -- # local i 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.269 14:55:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@41 -- # break 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:48.527 14:55:12 -- bdev/nbd_common.sh@147 -- # return 0 00:08:48.527 14:55:12 -- bdev/blockdev.sh@324 -- # killprocess 73754 00:08:48.527 14:55:12 -- common/autotest_common.sh@936 -- # '[' -z 73754 ']' 
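The lvol verification sequence above compresses to a handful of RPC calls: create a malloc bdev, build a logical-volume store on it, carve out a small volume, export it over nbd, and prove the whole stack works by formatting it. A condensed sketch of the calls as they appear in the trace (socket and script paths as in the log; the size comments are a reading of the RPC arguments, not something the log states):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB volume in store 'lvs'
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0   # succeeds only if the layered bdev path works end to end
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

mkfs.ext4 is a stronger check than a raw dd: it exercises discard, scattered metadata writes, and read-back across the lvol, which is why the trace records the full mke2fs output.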
00:08:48.527 14:55:12 -- common/autotest_common.sh@940 -- # kill -0 73754 00:08:48.527 14:55:12 -- common/autotest_common.sh@941 -- # uname 00:08:48.527 14:55:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:48.527 14:55:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73754 00:08:48.527 14:55:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:48.527 14:55:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:48.527 killing process with pid 73754 00:08:48.527 14:55:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73754' 00:08:48.527 14:55:12 -- common/autotest_common.sh@955 -- # kill 73754 00:08:48.527 14:55:12 -- common/autotest_common.sh@960 -- # wait 73754 00:08:48.785 14:55:12 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:48.785 00:08:48.785 real 0m10.066s 00:08:48.785 user 0m14.241s 00:08:48.785 sys 0m3.521s 00:08:48.785 14:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.785 14:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:48.785 ************************************ 00:08:48.785 END TEST bdev_nbd 00:08:48.785 ************************************ 00:08:48.785 14:55:12 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:48.785 14:55:12 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:48.785 skipping fio tests on NVMe due to multi-ns failures. 00:08:48.785 14:55:12 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:48.785 14:55:12 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:48.785 14:55:12 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:48.785 14:55:12 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:48.785 14:55:12 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:48.785 14:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.785 14:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:48.785 ************************************ 00:08:48.785 START TEST bdev_verify 00:08:48.785 ************************************ 00:08:48.785 14:55:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:48.785 [2024-11-18 14:55:12.334610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.785 [2024-11-18 14:55:12.334720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74154 ] 00:08:49.042 [2024-11-18 14:55:12.483470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.042 [2024-11-18 14:55:12.514666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.042 [2024-11-18 14:55:12.514711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.608 Running I/O for 5 seconds... 
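bdev_verify then leaves nbd behind and drives the same seven bdevs directly through SPDK's bdevperf example application. The invocation recorded in the trace, restated with the flag meanings spelled out (the flag descriptions are paraphrased from bdevperf's usage text, not shown in the log itself):

    # -q 128: 128 outstanding I/Os per job        -o 4096: 4 KiB per I/O
    # -w verify: write, read back, and check data integrity
    # -t 5: run for 5 seconds                     -C: every core may submit to each bdev
    # -m 0x3: core mask for cores 0 and 1, matching the two reactors the log reports
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3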
00:08:54.881 00:08:54.881 Latency(us) 00:08:54.881 [2024-11-18T14:55:18.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x5e800 00:08:54.881 Nvme0n1p1 : 5.05 2539.27 9.92 0.00 0.00 50275.18 6805.66 52832.10 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x5e800 length 0x5e800 00:08:54.881 Nvme0n1p1 : 5.05 2552.62 9.97 0.00 0.00 50017.79 5923.45 51420.55 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x5e7ff 00:08:54.881 Nvme0n1p2 : 5.05 2538.20 9.91 0.00 0.00 50208.56 7914.73 49202.41 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:54.881 Nvme0n1p2 : 5.05 2551.88 9.97 0.00 0.00 49997.50 6074.68 52428.80 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0xa0000 00:08:54.881 Nvme1n1 : 5.05 2543.49 9.94 0.00 0.00 50075.59 3377.62 47589.22 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0xa0000 length 0xa0000 00:08:54.881 Nvme1n1 : 5.05 2551.21 9.97 0.00 0.00 49966.40 6452.78 52025.50 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x80000 00:08:54.881 Nvme2n1 : 5.06 2541.44 9.93 0.00 0.00 50042.18 6856.07 48194.17 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x80000 length 0x80000 00:08:54.881 Nvme2n1 : 5.06 2549.16 9.96 0.00 0.00 49918.75 9729.58 52832.10 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x80000 00:08:54.881 Nvme2n2 : 5.06 2539.54 9.92 0.00 0.00 50017.48 9679.16 49000.76 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x80000 length 0x80000 00:08:54.881 Nvme2n2 : 5.06 2547.28 9.95 0.00 0.00 49901.82 12703.90 54848.59 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x80000 00:08:54.881 Nvme2n3 : 5.06 2537.67 9.91 0.00 0.00 49992.72 12703.90 48194.17 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x80000 length 0x80000 00:08:54.881 Nvme2n3 : 5.06 2545.44 9.94 0.00 0.00 49882.51 15526.99 54041.99 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x0 length 0x20000 00:08:54.881 Nvme3n1 : 5.07 2535.96 9.91 0.00 0.00 49977.59 11796.48 48597.46 00:08:54.881 [2024-11-18T14:55:18.471Z] Job: Nvme3n1 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:08:54.881 Verification LBA range: start 0x20000 length 0x20000 00:08:54.881 Nvme3n1 : 5.07 2543.72 9.94 0.00 0.00 49859.57 13712.15 52428.80 00:08:54.881 [2024-11-18T14:55:18.471Z] =================================================================================================================== 00:08:54.881 [2024-11-18T14:55:18.471Z] Total : 35616.86 139.13 0.00 0.00 50009.33 3377.62 54848.59 00:09:00.165 00:09:00.165 real 0m10.390s 00:09:00.165 user 0m19.987s 00:09:00.165 sys 0m0.255s 00:09:00.165 14:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.165 14:55:22 -- common/autotest_common.sh@10 -- # set +x 00:09:00.165 ************************************ 00:09:00.165 END TEST bdev_verify 00:09:00.165 ************************************ 00:09:00.165 14:55:22 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:00.165 14:55:22 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:00.165 14:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.165 14:55:22 -- common/autotest_common.sh@10 -- # set +x 00:09:00.165 ************************************ 00:09:00.165 START TEST bdev_verify_big_io 00:09:00.165 ************************************ 00:09:00.165 14:55:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:00.165 [2024-11-18 14:55:22.805851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:00.165 [2024-11-18 14:55:22.805971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74270 ] 00:09:00.165 [2024-11-18 14:55:22.955536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:00.165 [2024-11-18 14:55:22.993543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.165 [2024-11-18 14:55:22.993584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.165 Running I/O for 5 seconds... 
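bdev_verify_big_io is the same harness with one parameter changed: 64 KiB I/Os instead of 4 KiB (-o 65536). That single change explains the shape of the table below, with per-job IOPS dropping from roughly 2,500 to a few hundred while MiB/s rises:

    # Identical to the previous bdevperf run except for the I/O size:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3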
00:09:05.454 00:09:05.454 Latency(us) 00:09:05.454 [2024-11-18T14:55:29.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x0 length 0x5e80 00:09:05.454 Nvme0n1p1 : 5.39 225.23 14.08 0.00 0.00 554738.36 78239.90 909841.33 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x5e80 length 0x5e80 00:09:05.454 Nvme0n1p1 : 5.33 277.08 17.32 0.00 0.00 453382.99 40733.14 651730.31 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x0 length 0x5e7f 00:09:05.454 Nvme0n1p2 : 5.42 231.11 14.44 0.00 0.00 534752.03 29239.14 838860.80 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:05.454 Nvme0n1p2 : 5.34 277.00 17.31 0.00 0.00 448269.04 41338.09 603334.50 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x0 length 0xa000 00:09:05.454 Nvme1n1 : 5.42 231.04 14.44 0.00 0.00 526950.80 29844.09 761427.50 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0xa000 length 0xa000 00:09:05.454 Nvme1n1 : 5.36 284.45 17.78 0.00 0.00 434609.63 18854.20 554938.68 00:09:05.454 [2024-11-18T14:55:29.044Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.454 Verification LBA range: start 0x0 length 0x8000 00:09:05.454 Nvme2n1 : 5.44 238.49 14.91 0.00 0.00 504489.80 12855.14 703352.52 00:09:05.454 [2024-11-18T14:55:29.045Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x8000 length 0x8000 00:09:05.455 Nvme2n1 : 5.36 284.38 17.77 0.00 0.00 429141.42 19459.15 509769.26 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x0 length 0x8000 00:09:05.455 Nvme2n2 : 5.45 247.34 15.46 0.00 0.00 478887.14 13409.67 632371.99 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x8000 length 0x8000 00:09:05.455 Nvme2n2 : 5.38 291.03 18.19 0.00 0.00 414683.44 23492.14 454920.66 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x0 length 0x8000 00:09:05.455 Nvme2n3 : 5.48 261.00 16.31 0.00 0.00 446373.56 11544.42 890483.00 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x8000 length 0x8000 00:09:05.455 Nvme2n3 : 5.39 290.95 18.18 0.00 0.00 409657.72 23996.26 422656.79 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x0 length 0x2000 00:09:05.455 Nvme3n1 : 5.53 323.15 20.20 0.00 0.00 355723.93 428.50 1174405.12 00:09:05.455 [2024-11-18T14:55:29.045Z] Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:05.455 Verification LBA range: start 0x2000 length 0x2000 00:09:05.455 Nvme3n1 : 5.40 315.91 19.74 0.00 0.00 374300.99 1802.24 458147.05 00:09:05.455 [2024-11-18T14:55:29.045Z] =================================================================================================================== 00:09:05.455 [2024-11-18T14:55:29.045Z] Total : 3778.15 236.13 0.00 0.00 448295.16 428.50 1174405.12 00:09:07.432 00:09:07.432 real 0m7.688s 00:09:07.432 user 0m14.661s 00:09:07.432 sys 0m0.214s 00:09:07.432 14:55:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.432 14:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 ************************************ 00:09:07.432 END TEST bdev_verify_big_io 00:09:07.432 ************************************ 00:09:07.432 14:55:30 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.432 14:55:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:07.432 14:55:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.432 14:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 ************************************ 00:09:07.432 START TEST bdev_write_zeroes 00:09:07.432 ************************************ 00:09:07.432 14:55:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.432 [2024-11-18 14:55:30.546471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:07.432 [2024-11-18 14:55:30.546579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74380 ] 00:09:07.432 [2024-11-18 14:55:30.692542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.432 [2024-11-18 14:55:30.723244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.689 Running I/O for 1 seconds... 
00:09:08.625 00:09:08.625 Latency(us) 00:09:08.625 [2024-11-18T14:55:32.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme0n1p1 : 1.06 7905.77 30.88 0.00 0.00 16043.36 5797.42 120182.94 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme0n1p2 : 1.06 7869.90 30.74 0.00 0.00 16092.72 5721.80 120989.54 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme1n1 : 1.05 8090.35 31.60 0.00 0.00 15703.81 9830.40 114536.76 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme2n1 : 1.05 8078.54 31.56 0.00 0.00 15703.19 9326.28 113730.17 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme2n2 : 1.06 8067.17 31.51 0.00 0.00 15701.16 9124.63 112923.57 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme2n3 : 1.06 8055.72 31.47 0.00 0.00 15684.48 9074.22 113730.17 00:09:08.625 [2024-11-18T14:55:32.215Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:08.625 Nvme3n1 : 1.06 8044.42 31.42 0.00 0.00 15676.79 9124.63 114536.76 00:09:08.625 [2024-11-18T14:55:32.215Z] =================================================================================================================== 00:09:08.625 [2024-11-18T14:55:32.215Z] Total : 56111.86 219.19 0.00 0.00 15799.42 5721.80 120989.54 00:09:08.885 ************************************ 00:09:08.885 END TEST bdev_write_zeroes 00:09:08.885 ************************************ 00:09:08.885 00:09:08.885 real 0m1.850s 00:09:08.885 user 0m1.568s 00:09:08.885 sys 0m0.168s 00:09:08.885 14:55:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.885 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:09:08.885 14:55:32 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.885 14:55:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:08.885 14:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.885 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:09:08.885 ************************************ 00:09:08.885 START TEST bdev_json_nonenclosed 00:09:08.885 ************************************ 00:09:08.885 14:55:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:08.885 [2024-11-18 14:55:32.442637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:08.885 [2024-11-18 14:55:32.442740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74417 ] 00:09:09.145 [2024-11-18 14:55:32.591223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.145 [2024-11-18 14:55:32.623500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.145 [2024-11-18 14:55:32.623644] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:09.145 [2024-11-18 14:55:32.623671] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.145 ************************************ 00:09:09.145 END TEST bdev_json_nonenclosed 00:09:09.145 ************************************ 00:09:09.145 00:09:09.145 real 0m0.312s 00:09:09.145 user 0m0.114s 00:09:09.145 sys 0m0.095s 00:09:09.145 14:55:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.145 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.406 14:55:32 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.406 14:55:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:09.406 14:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.406 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.406 ************************************ 00:09:09.406 START TEST bdev_json_nonarray 00:09:09.406 ************************************ 00:09:09.406 14:55:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.406 [2024-11-18 14:55:32.817614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:09.406 [2024-11-18 14:55:32.817748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74444 ] 00:09:09.406 [2024-11-18 14:55:32.966706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.668 [2024-11-18 14:55:33.001340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.668 [2024-11-18 14:55:33.001514] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:09.668 [2024-11-18 14:55:33.001540] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.668 00:09:09.668 real 0m0.340s 00:09:09.668 user 0m0.141s 00:09:09.668 sys 0m0.096s 00:09:09.668 ************************************ 00:09:09.668 END TEST bdev_json_nonarray 00:09:09.668 ************************************ 00:09:09.668 14:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.668 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:09.668 14:55:33 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:09.668 14:55:33 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:09.668 14:55:33 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:09.668 14:55:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:09.668 14:55:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.668 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:09.668 ************************************ 00:09:09.668 START TEST bdev_gpt_uuid 00:09:09.668 ************************************ 00:09:09.668 14:55:33 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:09.668 14:55:33 -- bdev/blockdev.sh@612 -- # local bdev 00:09:09.668 14:55:33 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:09.668 14:55:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=74464 00:09:09.668 14:55:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:09.668 14:55:33 -- bdev/blockdev.sh@47 -- # waitforlisten 74464 00:09:09.668 14:55:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:09.668 14:55:33 -- common/autotest_common.sh@829 -- # '[' -z 74464 ']' 00:09:09.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.668 14:55:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.668 14:55:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.668 14:55:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.668 14:55:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.668 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:09:09.668 [2024-11-18 14:55:33.249499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:09.668 [2024-11-18 14:55:33.249664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74464 ] 00:09:09.928 [2024-11-18 14:55:33.403544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.928 [2024-11-18 14:55:33.454211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.928 [2024-11-18 14:55:33.454485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.501 14:55:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.501 14:55:34 -- common/autotest_common.sh@862 -- # return 0 00:09:10.501 14:55:34 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:10.501 14:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.501 14:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 Some configs were skipped because the RPC state that can call them passed over. 
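The gpt_uuid checks that follow all go through the same JSON-RPC pattern: fetch the partition bdev by its GPT GUID, then compare jq-extracted fields against the expected values. A minimal standalone sketch of that pattern, assuming the spdk_tgt started above is listening on the default socket (paths, jq filters, and the GUID are taken from this run):
# Ensure bdev examination has finished so the GPT partitions are registered.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
# Look up the first partition bdev by its unique partition GUID.
bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
# Exactly one bdev should match, aliased by that same GUID.
[[ $(jq -r length <<<"$bdev") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]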
00:09:11.073 14:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:11.073 14:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 14:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 14:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:11.073 14:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 14:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 14:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:11.073 { 00:09:11.073 "name": "Nvme0n1p1", 00:09:11.073 "aliases": [ 00:09:11.073 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:11.073 ], 00:09:11.073 "product_name": "GPT Disk", 00:09:11.073 "block_size": 4096, 00:09:11.073 "num_blocks": 774144, 00:09:11.073 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.073 "md_size": 64, 00:09:11.073 "md_interleave": false, 00:09:11.073 "dif_type": 0, 00:09:11.073 "assigned_rate_limits": { 00:09:11.073 "rw_ios_per_sec": 0, 00:09:11.073 "rw_mbytes_per_sec": 0, 00:09:11.073 "r_mbytes_per_sec": 0, 00:09:11.073 "w_mbytes_per_sec": 0 00:09:11.073 }, 00:09:11.073 "claimed": false, 00:09:11.073 "zoned": false, 00:09:11.073 "supported_io_types": { 00:09:11.073 "read": true, 00:09:11.073 "write": true, 00:09:11.073 "unmap": true, 00:09:11.073 "write_zeroes": true, 00:09:11.073 "flush": true, 00:09:11.073 "reset": true, 00:09:11.073 "compare": true, 00:09:11.073 "compare_and_write": false, 00:09:11.073 "abort": true, 00:09:11.073 "nvme_admin": false, 00:09:11.073 "nvme_io": false 00:09:11.073 }, 00:09:11.073 "driver_specific": { 00:09:11.073 "gpt": { 00:09:11.073 "base_bdev": "Nvme0n1", 00:09:11.073 "offset_blocks": 256, 00:09:11.073 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:11.073 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.073 "partition_name": "SPDK_TEST_first" 00:09:11.073 } 00:09:11.073 } 00:09:11.073 } 00:09:11.073 ]' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:11.073 14:55:34 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:11.073 14:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.073 14:55:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 14:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:11.073 { 00:09:11.073 "name": "Nvme0n1p2", 00:09:11.073 "aliases": [ 00:09:11.073 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:11.073 ], 00:09:11.073 "product_name": "GPT Disk", 00:09:11.073 "block_size": 4096, 00:09:11.073 "num_blocks": 774143, 00:09:11.073 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:11.073 "md_size": 64, 00:09:11.073 "md_interleave": false, 00:09:11.073 "dif_type": 0, 00:09:11.073 "assigned_rate_limits": { 00:09:11.073 "rw_ios_per_sec": 0, 00:09:11.073 "rw_mbytes_per_sec": 0, 00:09:11.073 "r_mbytes_per_sec": 0, 00:09:11.073 "w_mbytes_per_sec": 0 00:09:11.073 }, 00:09:11.073 "claimed": false, 00:09:11.073 "zoned": false, 00:09:11.073 "supported_io_types": { 00:09:11.073 "read": true, 00:09:11.073 "write": true, 00:09:11.073 "unmap": true, 00:09:11.073 "write_zeroes": true, 00:09:11.073 "flush": true, 00:09:11.073 "reset": true, 00:09:11.073 "compare": true, 00:09:11.073 "compare_and_write": false, 00:09:11.073 "abort": true, 00:09:11.073 "nvme_admin": false, 00:09:11.073 "nvme_io": false 00:09:11.073 }, 00:09:11.073 "driver_specific": { 00:09:11.073 "gpt": { 00:09:11.073 "base_bdev": "Nvme0n1", 00:09:11.073 "offset_blocks": 774400, 00:09:11.073 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:11.073 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:11.073 "partition_name": "SPDK_TEST_second" 00:09:11.073 } 00:09:11.073 } 00:09:11.073 } 00:09:11.073 ]' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:11.073 14:55:34 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.073 14:55:34 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.073 14:55:34 -- bdev/blockdev.sh@629 -- # killprocess 74464 00:09:11.073 14:55:34 -- common/autotest_common.sh@936 -- # '[' -z 74464 ']' 00:09:11.073 14:55:34 -- common/autotest_common.sh@940 -- # kill -0 74464 00:09:11.073 14:55:34 -- common/autotest_common.sh@941 -- # uname 00:09:11.073 14:55:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.073 14:55:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74464 00:09:11.073 14:55:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.073 14:55:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.073 killing process with pid 74464 00:09:11.073 14:55:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74464' 00:09:11.073 14:55:34 -- common/autotest_common.sh@955 -- # kill 74464 00:09:11.073 14:55:34 -- common/autotest_common.sh@960 -- # wait 74464 00:09:11.646 00:09:11.646 real 0m1.857s 00:09:11.646 user 0m1.927s 00:09:11.646 sys 0m0.451s 00:09:11.646 ************************************ 00:09:11.646 END TEST bdev_gpt_uuid 00:09:11.646 ************************************ 00:09:11.646 14:55:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.646 14:55:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.646 14:55:35 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:11.646 14:55:35 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:11.646 14:55:35 -- bdev/blockdev.sh@809 -- # cleanup 00:09:11.646 14:55:35 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:11.646 14:55:35 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.646 14:55:35 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:11.646 14:55:35 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:11.646 14:55:35 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:11.646 14:55:35 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.219 Waiting for block devices as requested 00:09:12.219 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.219 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.480 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.480 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.767 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:17.767 14:55:41 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:17.767 14:55:41 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:17.767 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:17.767 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:17.767 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:17.767 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:17.767 14:55:41 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:17.767 00:09:17.767 real 0m52.176s 00:09:17.767 user 1m8.292s 00:09:17.767 sys 0m7.997s 00:09:17.767 14:55:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.767 ************************************ 00:09:17.767 END TEST blockdev_nvme_gpt 00:09:17.767 ************************************ 00:09:17.767 14:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:18.027 14:55:41 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.027 14:55:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:18.027 14:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.027 14:55:41 -- common/autotest_common.sh@10 -- # set +x 00:09:18.027 ************************************ 00:09:18.027 START TEST nvme 00:09:18.027 ************************************ 00:09:18.027 14:55:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.027 * Looking for test storage... 
00:09:18.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.027 14:55:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:18.027 14:55:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:18.027 14:55:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:18.027 14:55:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:18.027 14:55:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:18.027 14:55:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:18.027 14:55:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:18.027 14:55:41 -- scripts/common.sh@335 -- # IFS=.-: 00:09:18.027 14:55:41 -- scripts/common.sh@335 -- # read -ra ver1 00:09:18.027 14:55:41 -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.027 14:55:41 -- scripts/common.sh@336 -- # read -ra ver2 00:09:18.027 14:55:41 -- scripts/common.sh@337 -- # local 'op=<' 00:09:18.027 14:55:41 -- scripts/common.sh@339 -- # ver1_l=2 00:09:18.027 14:55:41 -- scripts/common.sh@340 -- # ver2_l=1 00:09:18.027 14:55:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:18.027 14:55:41 -- scripts/common.sh@343 -- # case "$op" in 00:09:18.027 14:55:41 -- scripts/common.sh@344 -- # : 1 00:09:18.027 14:55:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:18.027 14:55:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.027 14:55:41 -- scripts/common.sh@364 -- # decimal 1 00:09:18.027 14:55:41 -- scripts/common.sh@352 -- # local d=1 00:09:18.027 14:55:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.027 14:55:41 -- scripts/common.sh@354 -- # echo 1 00:09:18.027 14:55:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:18.027 14:55:41 -- scripts/common.sh@365 -- # decimal 2 00:09:18.027 14:55:41 -- scripts/common.sh@352 -- # local d=2 00:09:18.027 14:55:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.027 14:55:41 -- scripts/common.sh@354 -- # echo 2 00:09:18.027 14:55:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:18.027 14:55:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:18.027 14:55:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:18.027 14:55:41 -- scripts/common.sh@367 -- # return 0 00:09:18.027 14:55:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.027 14:55:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:18.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.027 --rc genhtml_branch_coverage=1 00:09:18.027 --rc genhtml_function_coverage=1 00:09:18.027 --rc genhtml_legend=1 00:09:18.027 --rc geninfo_all_blocks=1 00:09:18.027 --rc geninfo_unexecuted_blocks=1 00:09:18.027 00:09:18.027 ' 00:09:18.027 14:55:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:18.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.027 --rc genhtml_branch_coverage=1 00:09:18.027 --rc genhtml_function_coverage=1 00:09:18.027 --rc genhtml_legend=1 00:09:18.027 --rc geninfo_all_blocks=1 00:09:18.027 --rc geninfo_unexecuted_blocks=1 00:09:18.027 00:09:18.027 ' 00:09:18.027 14:55:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:18.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.027 --rc genhtml_branch_coverage=1 00:09:18.027 --rc genhtml_function_coverage=1 00:09:18.027 --rc genhtml_legend=1 00:09:18.027 --rc geninfo_all_blocks=1 00:09:18.027 --rc geninfo_unexecuted_blocks=1 00:09:18.027 00:09:18.027 ' 00:09:18.027 14:55:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:18.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.027 --rc genhtml_branch_coverage=1 00:09:18.027 --rc genhtml_function_coverage=1 00:09:18.027 --rc genhtml_legend=1 00:09:18.027 --rc geninfo_all_blocks=1 00:09:18.027 --rc geninfo_unexecuted_blocks=1 00:09:18.027 00:09:18.027 ' 00:09:18.027 14:55:41 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.229 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.229 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.229 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.229 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.229 14:55:42 -- nvme/nvme.sh@79 -- # uname 00:09:19.491 Waiting for stub to ready for secondary processes... 00:09:19.491 14:55:42 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:19.491 14:55:42 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:19.491 14:55:42 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.491 14:55:42 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.491 14:55:42 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:19.491 14:55:42 -- common/autotest_common.sh@1055 -- # echo 0 00:09:19.491 14:55:42 -- common/autotest_common.sh@1057 -- # stubpid=75120 00:09:19.491 14:55:42 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:19.491 14:55:42 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:19.491 14:55:42 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:19.491 14:55:42 -- common/autotest_common.sh@1061 -- # [[ -e /proc/75120 ]] 00:09:19.491 14:55:42 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:19.491 [2024-11-18 14:55:42.860635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:19.491 [2024-11-18 14:55:42.860784] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.437 14:55:43 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:20.437 14:55:43 -- common/autotest_common.sh@1061 -- # [[ -e /proc/75120 ]] 00:09:20.437 14:55:43 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:21.011 [2024-11-18 14:55:44.372564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.011 [2024-11-18 14:55:44.405409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.011 [2024-11-18 14:55:44.405665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.011 [2024-11-18 14:55:44.405774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.011 [2024-11-18 14:55:44.420661] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.011 [2024-11-18 14:55:44.437028] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:21.011 [2024-11-18 14:55:44.437561] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:21.011 [2024-11-18 14:55:44.444982] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.011 [2024-11-18 14:55:44.446529] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:21.011 [2024-11-18 14:55:44.446688] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:21.011 [2024-11-18 14:55:44.449433] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.011 [2024-11-18 14:55:44.449939] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:21.011 [2024-11-18 14:55:44.450213] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:21.011 [2024-11-18 14:55:44.456963] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:21.011 [2024-11-18 14:55:44.457363] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:21.011 [2024-11-18 14:55:44.458453] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:21.011 [2024-11-18 14:55:44.459195] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:21.011 [2024-11-18 14:55:44.459376] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:21.272 done. 00:09:21.272 14:55:44 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:21.272 14:55:44 -- common/autotest_common.sh@1064 -- # echo done. 
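What the harness just did is SPDK's stub trick for multi-process tests: one long-lived stub app initializes DPDK as the primary process and pins the hugepage memory, so each short-lived nvme test binary can attach as a secondary process instead of paying EAL setup on every run. A minimal sketch of the same sequence, with the flags taken from this run (-s 4096 MB of hugepage memory, -i 0 for the shared-memory id, -m 0xE for the core mask) and the readiness file matching the check above:
# Launch the stub as the DPDK primary process and remember its pid.
/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
# Wait until the stub creates /var/run/spdk_stub0 to signal it is ready,
# giving up if the stub process dies first.
while [ ! -e /var/run/spdk_stub0 ] && [ -e /proc/$stubpid ]; do sleep 1s; done
# ... run the secondary-process test binaries here ...
kill "$stubpid"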
00:09:21.272 14:55:44 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:21.272 14:55:44 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:21.272 14:55:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.272 14:55:44 -- common/autotest_common.sh@10 -- # set +x 00:09:21.272 ************************************ 00:09:21.272 START TEST nvme_reset 00:09:21.272 ************************************ 00:09:21.272 14:55:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:21.532 Initializing NVMe Controllers 00:09:21.532 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:21.532 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:21.532 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:21.532 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:21.532 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:21.532 ************************************ 00:09:21.532 END TEST nvme_reset 00:09:21.532 ************************************ 00:09:21.532 00:09:21.532 real 0m0.214s 00:09:21.532 user 0m0.061s 00:09:21.532 sys 0m0.102s 00:09:21.532 14:55:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.532 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:21.532 14:55:45 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:21.532 14:55:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.532 14:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.532 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:21.532 ************************************ 00:09:21.532 START TEST nvme_identify 00:09:21.532 ************************************ 00:09:21.532 14:55:45 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:21.532 14:55:45 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:21.532 14:55:45 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:21.532 14:55:45 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.532 14:55:45 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:21.532 14:55:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:21.532 14:55:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:21.532 14:55:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.532 14:55:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.532 14:55:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:21.794 14:55:45 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:21.794 14:55:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:21.794 14:55:45 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:21.794 [2024-11-18 14:55:45.326083] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 75162 terminated unexpected 00:09:21.794 ===================================================== 00:09:21.794 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:21.794 ===================================================== 00:09:21.794 Controller Capabilities/Features 00:09:21.794 ================================ 00:09:21.794 Vendor ID: 1b36 00:09:21.794 Subsystem Vendor ID: 1af4 00:09:21.794 Serial Number: 12343 00:09:21.794 Model Number: QEMU NVMe Ctrl 00:09:21.794 Firmware Version: 8.0.0 00:09:21.794 Recommended Arb 
Burst: 6 00:09:21.794 IEEE OUI Identifier: 00 54 52 00:09:21.794 Multi-path I/O 00:09:21.794 May have multiple subsystem ports: No 00:09:21.794 May have multiple controllers: Yes 00:09:21.794 Associated with SR-IOV VF: No 00:09:21.794 Max Data Transfer Size: 524288 00:09:21.794 Max Number of Namespaces: 256 00:09:21.794 Max Number of I/O Queues: 64 00:09:21.794 NVMe Specification Version (VS): 1.4 00:09:21.794 NVMe Specification Version (Identify): 1.4 00:09:21.794 Maximum Queue Entries: 2048 00:09:21.794 Contiguous Queues Required: Yes 00:09:21.794 Arbitration Mechanisms Supported 00:09:21.794 Weighted Round Robin: Not Supported 00:09:21.794 Vendor Specific: Not Supported 00:09:21.794 Reset Timeout: 7500 ms 00:09:21.794 Doorbell Stride: 4 bytes 00:09:21.794 NVM Subsystem Reset: Not Supported 00:09:21.794 Command Sets Supported 00:09:21.794 NVM Command Set: Supported 00:09:21.794 Boot Partition: Not Supported 00:09:21.794 Memory Page Size Minimum: 4096 bytes 00:09:21.794 Memory Page Size Maximum: 65536 bytes 00:09:21.794 Persistent Memory Region: Not Supported 00:09:21.794 Optional Asynchronous Events Supported 00:09:21.795 Namespace Attribute Notices: Supported 00:09:21.795 Firmware Activation Notices: Not Supported 00:09:21.795 ANA Change Notices: Not Supported 00:09:21.795 PLE Aggregate Log Change Notices: Not Supported 00:09:21.795 LBA Status Info Alert Notices: Not Supported 00:09:21.795 EGE Aggregate Log Change Notices: Not Supported 00:09:21.795 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.795 Zone Descriptor Change Notices: Not Supported 00:09:21.795 Discovery Log Change Notices: Not Supported 00:09:21.795 Controller Attributes 00:09:21.795 128-bit Host Identifier: Not Supported 00:09:21.795 Non-Operational Permissive Mode: Not Supported 00:09:21.795 NVM Sets: Not Supported 00:09:21.795 Read Recovery Levels: Not Supported 00:09:21.795 Endurance Groups: Supported 00:09:21.795 Predictable Latency Mode: Not Supported 00:09:21.795 Traffic Based Keep Alive: Not Supported 00:09:21.795 Namespace Granularity: Not Supported 00:09:21.795 SQ Associations: Not Supported 00:09:21.795 UUID List: Not Supported 00:09:21.795 Multi-Domain Subsystem: Not Supported 00:09:21.795 Fixed Capacity Management: Not Supported 00:09:21.795 Variable Capacity Management: Not Supported 00:09:21.795 Delete Endurance Group: Not Supported 00:09:21.795 Delete NVM Set: Not Supported 00:09:21.795 Extended LBA Formats Supported: Supported 00:09:21.795 Flexible Data Placement Supported: Supported 00:09:21.795 00:09:21.795 Controller Memory Buffer Support 00:09:21.795 ================================ 00:09:21.795 Supported: No 00:09:21.795 00:09:21.795 Persistent Memory Region Support 00:09:21.795 ================================ 00:09:21.795 Supported: No 00:09:21.795 00:09:21.795 Admin Command Set Attributes 00:09:21.795 ============================ 00:09:21.795 Security Send/Receive: Not Supported 00:09:21.795 Format NVM: Supported 00:09:21.795 Firmware Activate/Download: Not Supported 00:09:21.795 Namespace Management: Supported 00:09:21.795 Device Self-Test: Not Supported 00:09:21.795 Directives: Supported 00:09:21.795 NVMe-MI: Not Supported 00:09:21.795 Virtualization Management: Not Supported 00:09:21.795 Doorbell Buffer Config: Supported 00:09:21.795 Get LBA Status Capability: Not Supported 00:09:21.795 Command & Feature Lockdown Capability: Not Supported 00:09:21.795 Abort Command Limit: 4 00:09:21.795 Async Event Request Limit: 4 00:09:21.795 Number of Firmware Slots: N/A 00:09:21.795 Firmware
Slot 1 Read-Only: N/A 00:09:21.795 Firmware Activation Without Reset: N/A 00:09:21.795 Multiple Update Detection Support: N/A 00:09:21.795 Firmware Update Granularity: No Information Provided 00:09:21.795 Per-Namespace SMART Log: Yes 00:09:21.795 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.795 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.795 Command Effects Log Page: Supported 00:09:21.795 Get Log Page Extended Data: Supported 00:09:21.795 Telemetry Log Pages: Not Supported 00:09:21.795 Persistent Event Log Pages: Not Supported 00:09:21.795 Supported Log Pages Log Page: May Support 00:09:21.795 Commands Supported & Effects Log Page: Not Supported 00:09:21.795 Feature Identifiers & Effects Log Page:May Support 00:09:21.795 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.795 Data Area 4 for Telemetry Log: Not Supported 00:09:21.795 Error Log Page Entries Supported: 1 00:09:21.795 Keep Alive: Not Supported 00:09:21.795 00:09:21.795 NVM Command Set Attributes 00:09:21.795 ========================== 00:09:21.795 Submission Queue Entry Size 00:09:21.795 Max: 64 00:09:21.795 Min: 64 00:09:21.795 Completion Queue Entry Size 00:09:21.795 Max: 16 00:09:21.795 Min: 16 00:09:21.795 Number of Namespaces: 256 00:09:21.795 Compare Command: Supported 00:09:21.795 Write Uncorrectable Command: Not Supported 00:09:21.795 Dataset Management Command: Supported 00:09:21.795 Write Zeroes Command: Supported 00:09:21.795 Set Features Save Field: Supported 00:09:21.795 Reservations: Not Supported 00:09:21.795 Timestamp: Supported 00:09:21.795 Copy: Supported 00:09:21.795 Volatile Write Cache: Present 00:09:21.795 Atomic Write Unit (Normal): 1 00:09:21.795 Atomic Write Unit (PFail): 1 00:09:21.795 Atomic Compare & Write Unit: 1 00:09:21.795 Fused Compare & Write: Not Supported 00:09:21.795 Scatter-Gather List 00:09:21.795 SGL Command Set: Supported 00:09:21.795 SGL Keyed: Not Supported 00:09:21.795 SGL Bit Bucket Descriptor: Not Supported 00:09:21.795 SGL Metadata Pointer: Not Supported 00:09:21.795 Oversized SGL: Not Supported 00:09:21.795 SGL Metadata Address: Not Supported 00:09:21.795 SGL Offset: Not Supported 00:09:21.795 Transport SGL Data Block: Not Supported 00:09:21.795 Replay Protected Memory Block: Not Supported 00:09:21.795 00:09:21.795 Firmware Slot Information 00:09:21.795 ========================= 00:09:21.795 Active slot: 1 00:09:21.795 Slot 1 Firmware Revision: 1.0 00:09:21.795 00:09:21.795 00:09:21.795 Commands Supported and Effects 00:09:21.795 ============================== 00:09:21.795 Admin Commands 00:09:21.795 -------------- 00:09:21.795 Delete I/O Submission Queue (00h): Supported 00:09:21.795 Create I/O Submission Queue (01h): Supported 00:09:21.795 Get Log Page (02h): Supported 00:09:21.795 Delete I/O Completion Queue (04h): Supported 00:09:21.795 Create I/O Completion Queue (05h): Supported 00:09:21.795 Identify (06h): Supported 00:09:21.795 Abort (08h): Supported 00:09:21.795 Set Features (09h): Supported 00:09:21.795 Get Features (0Ah): Supported 00:09:21.795 Asynchronous Event Request (0Ch): Supported 00:09:21.795 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.795 Directive Send (19h): Supported 00:09:21.795 Directive Receive (1Ah): Supported 00:09:21.795 Virtualization Management (1Ch): Supported 00:09:21.795 Doorbell Buffer Config (7Ch): Supported 00:09:21.795 Format NVM (80h): Supported LBA-Change 00:09:21.795 I/O Commands 00:09:21.795 ------------ 00:09:21.795 Flush (00h): Supported LBA-Change 00:09:21.795 Write (01h): 
Supported LBA-Change 00:09:21.795 Read (02h): Supported 00:09:21.795 Compare (05h): Supported 00:09:21.795 Write Zeroes (08h): Supported LBA-Change 00:09:21.795 Dataset Management (09h): Supported LBA-Change 00:09:21.795 Unknown (0Ch): Supported 00:09:21.795 Unknown (12h): Supported 00:09:21.795 Copy (19h): Supported LBA-Change 00:09:21.795 Unknown (1Dh): Supported LBA-Change 00:09:21.795 00:09:21.795 Error Log 00:09:21.795 ========= 00:09:21.795 00:09:21.795 Arbitration 00:09:21.795 =========== 00:09:21.795 Arbitration Burst: no limit 00:09:21.795 00:09:21.795 Power Management 00:09:21.795 ================ 00:09:21.795 Number of Power States: 1 00:09:21.795 Current Power State: Power State #0 00:09:21.795 Power State #0: 00:09:21.795 Max Power: 25.00 W 00:09:21.795 Non-Operational State: Operational 00:09:21.795 Entry Latency: 16 microseconds 00:09:21.795 Exit Latency: 4 microseconds 00:09:21.795 Relative Read Throughput: 0 00:09:21.795 Relative Read Latency: 0 00:09:21.795 Relative Write Throughput: 0 00:09:21.795 Relative Write Latency: 0 00:09:21.795 Idle Power: Not Reported 00:09:21.795 Active Power: Not Reported 00:09:21.795 Non-Operational Permissive Mode: Not Supported 00:09:21.795 00:09:21.795 Health Information 00:09:21.795 ================== 00:09:21.795 Critical Warnings: 00:09:21.795 Available Spare Space: OK 00:09:21.795 Temperature: OK 00:09:21.795 Device Reliability: OK 00:09:21.795 Read Only: No 00:09:21.795 Volatile Memory Backup: OK 00:09:21.795 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.795 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.795 Available Spare: 0% 00:09:21.795 Available Spare Threshold: 0% 00:09:21.795 Life Percentage Used: 0% 00:09:21.795 Data Units Read: 1454 00:09:21.795 Data Units Written: 676 00:09:21.795 Host Read Commands: 63280 00:09:21.795 Host Write Commands: 31148 00:09:21.795 Controller Busy Time: 0 minutes 00:09:21.795 Power Cycles: 0 00:09:21.795 Power On Hours: 0 hours 00:09:21.795 Unsafe Shutdowns: 0 00:09:21.795 Unrecoverable Media Errors: 0 00:09:21.795 Lifetime Error Log Entries: 0 00:09:21.795 Warning Temperature Time: 0 minutes 00:09:21.795 Critical Temperature Time: 0 minutes 00:09:21.795 00:09:21.795 Number of Queues 00:09:21.795 ================ 00:09:21.795 Number of I/O Submission Queues: 64 00:09:21.795 Number of I/O Completion Queues: 64 00:09:21.795 00:09:21.795 ZNS Specific Controller Data 00:09:21.795 ============================ 00:09:21.795 Zone Append Size Limit: 0 00:09:21.795 00:09:21.795 00:09:21.795 Active Namespaces 00:09:21.795 ================= 00:09:21.795 Namespace ID:1 00:09:21.796 Error Recovery Timeout: Unlimited 00:09:21.796 Command Set Identifier: NVM (00h) 00:09:21.796 Deallocate: Supported 00:09:21.796 Deallocated/Unwritten Error: Supported 00:09:21.796 Deallocated Read Value: All 0x00 00:09:21.796 Deallocate in Write Zeroes: Not Supported 00:09:21.796 Deallocated Guard Field: 0xFFFF 00:09:21.796 Flush: Supported 00:09:21.796 Reservation: Not Supported 00:09:21.796 Namespace Sharing Capabilities: Multiple Controllers 00:09:21.796 Size (in LBAs): 262144 (1GiB) 00:09:21.796 Capacity (in LBAs): 262144 (1GiB) 00:09:21.796 Utilization (in LBAs): 262144 (1GiB) 00:09:21.796 Thin Provisioning: Not Supported 00:09:21.796 Per-NS Atomic Units: No 00:09:21.796 Maximum Single Source Range Length: 128 00:09:21.796 Maximum Copy Length: 128 00:09:21.796 Maximum Source Range Count: 128 00:09:21.796 NGUID/EUI64 Never Reused: No 00:09:21.796 Namespace Write Protected: No 00:09:21.796 Endurance group ID: 1 
00:09:21.796 Number of LBA Formats: 8 00:09:21.796 Current LBA Format: LBA Format #04 00:09:21.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.796 00:09:21.796 Get Feature FDP: 00:09:21.796 ================ 00:09:21.796 Enabled: Yes 00:09:21.796 FDP configuration index: 0 00:09:21.796 00:09:21.796 FDP configurations log page 00:09:21.796 =========================== 00:09:21.796 Number of FDP configurations: 1 00:09:21.796 Version: 0 00:09:21.796 Size: 112 00:09:21.796 FDP Configuration Descriptor: 0 00:09:21.796 Descriptor Size: 96 00:09:21.796 Reclaim Group Identifier format: 2 00:09:21.796 FDP Volatile Write Cache: Not Present 00:09:21.796 FDP Configuration: Valid 00:09:21.796 Vendor Specific Size: 0 00:09:21.796 Number of Reclaim Groups: 2 00:09:21.796 Number of Reclaim Unit Handles: 8 00:09:21.796 Max Placement Identifiers: 128 00:09:21.796 Number of Namespaces Supported: 256 00:09:21.796 Reclaim Unit Nominal Size: 6000000 bytes 00:09:21.796 Estimated Reclaim Unit Time Limit: Not Reported 00:09:21.796 RUH Desc #000: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #001: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #002: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #003: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #004: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #005: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #006: RUH Type: Initially Isolated 00:09:21.796 RUH Desc #007: RUH Type: Initially Isolated 00:09:21.796 00:09:21.796 FDP reclaim unit handle usage log page 00:09:21.796 ====================================== 00:09:21.796 [2024-11-18 14:55:45.328843] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 75162 terminated unexpected 00:09:21.796 Number of Reclaim Unit Handles: 8 00:09:21.796 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:21.796 RUH Usage Desc #001: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #002: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #003: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #004: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #005: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #006: RUH Attributes: Unused 00:09:21.796 RUH Usage Desc #007: RUH Attributes: Unused 00:09:21.796 00:09:21.796 FDP statistics log page 00:09:21.796 ======================= 00:09:21.796 Host bytes with metadata written: 446279680 00:09:21.796 Media bytes with metadata written: 446361600 00:09:21.796 Media bytes erased: 0 00:09:21.796 00:09:21.796 FDP events log page 00:09:21.796 =================== 00:09:21.796 Number of FDP events: 0 00:09:21.796 00:09:21.796 ===================================================== 00:09:21.796 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:21.796 ===================================================== 00:09:21.796 Controller Capabilities/Features 00:09:21.796 ================================ 00:09:21.796 Vendor ID: 1b36 00:09:21.796 Subsystem Vendor ID: 1af4 00:09:21.796 Serial Number: 12340 00:09:21.796 Model Number: QEMU NVMe Ctrl 00:09:21.796 Firmware Version: 8.0.0 00:09:21.796 Recommended
Arb Burst: 6 00:09:21.796 IEEE OUI Identifier: 00 54 52 00:09:21.796 Multi-path I/O 00:09:21.796 May have multiple subsystem ports: No 00:09:21.796 May have multiple controllers: No 00:09:21.796 Associated with SR-IOV VF: No 00:09:21.796 Max Data Transfer Size: 524288 00:09:21.796 Max Number of Namespaces: 256 00:09:21.796 Max Number of I/O Queues: 64 00:09:21.796 NVMe Specification Version (VS): 1.4 00:09:21.796 NVMe Specification Version (Identify): 1.4 00:09:21.796 Maximum Queue Entries: 2048 00:09:21.796 Contiguous Queues Required: Yes 00:09:21.796 Arbitration Mechanisms Supported 00:09:21.796 Weighted Round Robin: Not Supported 00:09:21.796 Vendor Specific: Not Supported 00:09:21.796 Reset Timeout: 7500 ms 00:09:21.796 Doorbell Stride: 4 bytes 00:09:21.796 NVM Subsystem Reset: Not Supported 00:09:21.796 Command Sets Supported 00:09:21.796 NVM Command Set: Supported 00:09:21.796 Boot Partition: Not Supported 00:09:21.796 Memory Page Size Minimum: 4096 bytes 00:09:21.796 Memory Page Size Maximum: 65536 bytes 00:09:21.796 Persistent Memory Region: Not Supported 00:09:21.796 Optional Asynchronous Events Supported 00:09:21.796 Namespace Attribute Notices: Supported 00:09:21.796 Firmware Activation Notices: Not Supported 00:09:21.796 ANA Change Notices: Not Supported 00:09:21.796 PLE Aggregate Log Change Notices: Not Supported 00:09:21.796 LBA Status Info Alert Notices: Not Supported 00:09:21.796 EGE Aggregate Log Change Notices: Not Supported 00:09:21.796 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.796 Zone Descriptor Change Notices: Not Supported 00:09:21.796 Discovery Log Change Notices: Not Supported 00:09:21.796 Controller Attributes 00:09:21.796 128-bit Host Identifier: Not Supported 00:09:21.796 Non-Operational Permissive Mode: Not Supported 00:09:21.796 NVM Sets: Not Supported 00:09:21.796 Read Recovery Levels: Not Supported 00:09:21.796 Endurance Groups: Not Supported 00:09:21.796 Predictable Latency Mode: Not Supported 00:09:21.796 Traffic Based Keep Alive: Not Supported 00:09:21.796 Namespace Granularity: Not Supported 00:09:21.796 SQ Associations: Not Supported 00:09:21.796 UUID List: Not Supported 00:09:21.796 Multi-Domain Subsystem: Not Supported 00:09:21.796 Fixed Capacity Management: Not Supported 00:09:21.796 Variable Capacity Management: Not Supported 00:09:21.796 Delete Endurance Group: Not Supported 00:09:21.796 Delete NVM Set: Not Supported 00:09:21.796 Extended LBA Formats Supported: Supported 00:09:21.796 Flexible Data Placement Supported: Not Supported 00:09:21.796 00:09:21.796 Controller Memory Buffer Support 00:09:21.796 ================================ 00:09:21.796 Supported: No 00:09:21.796 00:09:21.796 Persistent Memory Region Support 00:09:21.796 ================================ 00:09:21.796 Supported: No 00:09:21.796 00:09:21.796 Admin Command Set Attributes 00:09:21.796 ============================ 00:09:21.796 Security Send/Receive: Not Supported 00:09:21.796 Format NVM: Supported 00:09:21.796 Firmware Activate/Download: Not Supported 00:09:21.796 Namespace Management: Supported 00:09:21.796 Device Self-Test: Not Supported 00:09:21.796 Directives: Supported 00:09:21.796 NVMe-MI: Not Supported 00:09:21.796 Virtualization Management: Not Supported 00:09:21.796 Doorbell Buffer Config: Supported 00:09:21.796 Get LBA Status Capability: Not Supported 00:09:21.796 Command & Feature Lockdown Capability: Not Supported 00:09:21.796 Abort Command Limit: 4 00:09:21.796 Async Event Request Limit: 4 00:09:21.796 Number of Firmware Slots: N/A 00:09:21.796
Firmware Slot 1 Read-Only: N/A 00:09:21.796 Firmware Activation Without Reset: N/A 00:09:21.796 Multiple Update Detection Support: N/A 00:09:21.796 Firmware Update Granularity: No Information Provided 00:09:21.796 Per-Namespace SMART Log: Yes 00:09:21.796 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.796 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:21.796 Command Effects Log Page: Supported 00:09:21.796 Get Log Page Extended Data: Supported 00:09:21.796 Telemetry Log Pages: Not Supported 00:09:21.796 Persistent Event Log Pages: Not Supported 00:09:21.796 Supported Log Pages Log Page: May Support 00:09:21.796 Commands Supported & Effects Log Page: Not Supported 00:09:21.796 Feature Identifiers & Effects Log Page:May Support 00:09:21.796 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.796 Data Area 4 for Telemetry Log: Not Supported 00:09:21.796 Error Log Page Entries Supported: 1 00:09:21.797 Keep Alive: Not Supported 00:09:21.797 00:09:21.797 NVM Command Set Attributes 00:09:21.797 ========================== 00:09:21.797 Submission Queue Entry Size 00:09:21.797 Max: 64 00:09:21.797 Min: 64 00:09:21.797 Completion Queue Entry Size 00:09:21.797 Max: 16 00:09:21.797 Min: 16 00:09:21.797 Number of Namespaces: 256 00:09:21.797 Compare Command: Supported 00:09:21.797 Write Uncorrectable Command: Not Supported 00:09:21.797 Dataset Management Command: Supported 00:09:21.797 Write Zeroes Command: Supported 00:09:21.797 Set Features Save Field: Supported 00:09:21.797 Reservations: Not Supported 00:09:21.797 Timestamp: Supported 00:09:21.797 Copy: Supported 00:09:21.797 Volatile Write Cache: Present 00:09:21.797 Atomic Write Unit (Normal): 1 00:09:21.797 Atomic Write Unit (PFail): 1 00:09:21.797 Atomic Compare & Write Unit: 1 00:09:21.797 Fused Compare & Write: Not Supported 00:09:21.797 Scatter-Gather List 00:09:21.797 SGL Command Set: Supported 00:09:21.797 SGL Keyed: Not Supported 00:09:21.797 SGL Bit Bucket Descriptor: Not Supported 00:09:21.797 SGL Metadata Pointer: Not Supported 00:09:21.797 Oversized SGL: Not Supported 00:09:21.797 SGL Metadata Address: Not Supported 00:09:21.797 SGL Offset: Not Supported 00:09:21.797 Transport SGL Data Block: Not Supported 00:09:21.797 Replay Protected Memory Block: Not Supported 00:09:21.797 00:09:21.797 Firmware Slot Information 00:09:21.797 ========================= 00:09:21.797 Active slot: 1 00:09:21.797 Slot 1 Firmware Revision: 1.0 00:09:21.797 00:09:21.797 00:09:21.797 Commands Supported and Effects 00:09:21.797 ============================== 00:09:21.797 Admin Commands 00:09:21.797 -------------- 00:09:21.797 Delete I/O Submission Queue (00h): Supported 00:09:21.797 Create I/O Submission Queue (01h): Supported 00:09:21.797 Get Log Page (02h): Supported 00:09:21.797 Delete I/O Completion Queue (04h): Supported 00:09:21.797 Create I/O Completion Queue (05h): Supported 00:09:21.797 Identify (06h): Supported 00:09:21.797 Abort (08h): Supported 00:09:21.797 Set Features (09h): Supported 00:09:21.797 Get Features (0Ah): Supported 00:09:21.797 Asynchronous Event Request (0Ch): Supported 00:09:21.797 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.797 Directive Send (19h): Supported 00:09:21.797 Directive Receive (1Ah): Supported 00:09:21.797 Virtualization Management (1Ch): Supported 00:09:21.797 Doorbell Buffer Config (7Ch): Supported 00:09:21.797 Format NVM (80h): Supported LBA-Change 00:09:21.797 I/O Commands 00:09:21.797 ------------ 00:09:21.797 Flush (00h): Supported LBA-Change 00:09:21.797 Write (01h): 
Supported LBA-Change 00:09:21.797 Read (02h): Supported 00:09:21.797 Compare (05h): Supported 00:09:21.797 Write Zeroes (08h): Supported LBA-Change 00:09:21.797 Dataset Management (09h): Supported LBA-Change 00:09:21.797 Unknown (0Ch): Supported 00:09:21.797 [2024-11-18 14:55:45.330464] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 75162 terminated unexpected 00:09:21.797 Unknown (12h): Supported 00:09:21.797 Copy (19h): Supported LBA-Change 00:09:21.797 Unknown (1Dh): Supported LBA-Change 00:09:21.797 00:09:21.797 Error Log 00:09:21.797 ========= 00:09:21.797 00:09:21.797 Arbitration 00:09:21.797 =========== 00:09:21.797 Arbitration Burst: no limit 00:09:21.797 00:09:21.797 Power Management 00:09:21.797 ================ 00:09:21.797 Number of Power States: 1 00:09:21.797 Current Power State: Power State #0 00:09:21.797 Power State #0: 00:09:21.797 Max Power: 25.00 W 00:09:21.797 Non-Operational State: Operational 00:09:21.797 Entry Latency: 16 microseconds 00:09:21.797 Exit Latency: 4 microseconds 00:09:21.797 Relative Read Throughput: 0 00:09:21.797 Relative Read Latency: 0 00:09:21.797 Relative Write Throughput: 0 00:09:21.797 Relative Write Latency: 0 00:09:21.797 Idle Power: Not Reported 00:09:21.797 Active Power: Not Reported 00:09:21.797 Non-Operational Permissive Mode: Not Supported 00:09:21.797 00:09:21.797 Health Information 00:09:21.797 ================== 00:09:21.797 Critical Warnings: 00:09:21.797 Available Spare Space: OK 00:09:21.797 Temperature: OK 00:09:21.797 Device Reliability: OK 00:09:21.797 Read Only: No 00:09:21.797 Volatile Memory Backup: OK 00:09:21.797 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.797 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.797 Available Spare: 0% 00:09:21.797 Available Spare Threshold: 0% 00:09:21.797 Life Percentage Used: 0% 00:09:21.797 Data Units Read: 1819 00:09:21.797 Data Units Written: 841 00:09:21.797 Host Read Commands: 90197 00:09:21.797 Host Write Commands: 44911 00:09:21.797 Controller Busy Time: 0 minutes 00:09:21.797 Power Cycles: 0 00:09:21.797 Power On Hours: 0 hours 00:09:21.797 Unsafe Shutdowns: 0 00:09:21.797 Unrecoverable Media Errors: 0 00:09:21.797 Lifetime Error Log Entries: 0 00:09:21.797 Warning Temperature Time: 0 minutes 00:09:21.797 Critical Temperature Time: 0 minutes 00:09:21.797 00:09:21.797 Number of Queues 00:09:21.797 ================ 00:09:21.797 Number of I/O Submission Queues: 64 00:09:21.797 Number of I/O Completion Queues: 64 00:09:21.797 00:09:21.797 ZNS Specific Controller Data 00:09:21.797 ============================ 00:09:21.797 Zone Append Size Limit: 0 00:09:21.797 00:09:21.797 00:09:21.797 Active Namespaces 00:09:21.797 ================= 00:09:21.797 Namespace ID:1 00:09:21.797 Error Recovery Timeout: Unlimited 00:09:21.797 Command Set Identifier: NVM (00h) 00:09:21.797 Deallocate: Supported 00:09:21.797 Deallocated/Unwritten Error: Supported 00:09:21.797 Deallocated Read Value: All 0x00 00:09:21.797 Deallocate in Write Zeroes: Not Supported 00:09:21.797 Deallocated Guard Field: 0xFFFF 00:09:21.797 Flush: Supported 00:09:21.797 Reservation: Not Supported 00:09:21.797 Metadata Transferred as: Separate Metadata Buffer 00:09:21.797 Namespace Sharing Capabilities: Private 00:09:21.797 Size (in LBAs): 1548666 (5GiB) 00:09:21.797 Capacity (in LBAs): 1548666 (5GiB) 00:09:21.797 Utilization (in LBAs): 1548666 (5GiB) 00:09:21.797 Thin Provisioning: Not Supported 00:09:21.797 Per-NS Atomic Units: No 00:09:21.797 Maximum Single Source Range Length: 
128 00:09:21.797 Maximum Copy Length: 128 00:09:21.797 Maximum Source Range Count: 128 00:09:21.797 NGUID/EUI64 Never Reused: No 00:09:21.797 Namespace Write Protected: No 00:09:21.797 Number of LBA Formats: 8 00:09:21.797 Current LBA Format: LBA Format #07 00:09:21.797 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.797 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.797 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.797 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.797 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.797 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.797 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.797 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.797 00:09:21.797 ===================================================== 00:09:21.797 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:21.797 ===================================================== 00:09:21.797 Controller Capabilities/Features 00:09:21.797 ================================ 00:09:21.797 Vendor ID: 1b36 00:09:21.797 Subsystem Vendor ID: 1af4 00:09:21.797 Serial Number: 12341 00:09:21.797 Model Number: QEMU NVMe Ctrl 00:09:21.797 Firmware Version: 8.0.0 00:09:21.797 Recommended Arb Burst: 6 00:09:21.797 IEEE OUI Identifier: 00 54 52 00:09:21.797 Multi-path I/O 00:09:21.797 May have multiple subsystem ports: No 00:09:21.797 May have multiple controllers: No 00:09:21.797 Associated with SR-IOV VF: No 00:09:21.797 Max Data Transfer Size: 524288 00:09:21.797 Max Number of Namespaces: 256 00:09:21.797 Max Number of I/O Queues: 64 00:09:21.797 NVMe Specification Version (VS): 1.4 00:09:21.797 NVMe Specification Version (Identify): 1.4 00:09:21.797 Maximum Queue Entries: 2048 00:09:21.797 Contiguous Queues Required: Yes 00:09:21.797 Arbitration Mechanisms Supported 00:09:21.797 Weighted Round Robin: Not Supported 00:09:21.797 Vendor Specific: Not Supported 00:09:21.797 Reset Timeout: 7500 ms 00:09:21.797 Doorbell Stride: 4 bytes 00:09:21.797 NVM Subsystem Reset: Not Supported 00:09:21.797 Command Sets Supported 00:09:21.797 NVM Command Set: Supported 00:09:21.797 Boot Partition: Not Supported 00:09:21.797 Memory Page Size Minimum: 4096 bytes 00:09:21.797 Memory Page Size Maximum: 65536 bytes 00:09:21.797 Persistent Memory Region: Not Supported 00:09:21.798 Optional Asynchronous Events Supported 00:09:21.798 Namespace Attribute Notices: Supported 00:09:21.798 Firmware Activation Notices: Not Supported 00:09:21.798 ANA Change Notices: Not Supported 00:09:21.798 PLE Aggregate Log Change Notices: Not Supported 00:09:21.798 LBA Status Info Alert Notices: Not Supported 00:09:21.798 EGE Aggregate Log Change Notices: Not Supported 00:09:21.798 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.798 Zone Descriptor Change Notices: Not Supported 00:09:21.798 Discovery Log Change Notices: Not Supported 00:09:21.798 Controller Attributes 00:09:21.798 128-bit Host Identifier: Not Supported 00:09:21.798 Non-Operational Permissive Mode: Not Supported 00:09:21.798 NVM Sets: Not Supported 00:09:21.798 Read Recovery Levels: Not Supported 00:09:21.798 Endurance Groups: Not Supported 00:09:21.798 Predictable Latency Mode: Not Supported 00:09:21.798 Traffic Based Keep ALive: Not Supported 00:09:21.798 Namespace Granularity: Not Supported 00:09:21.798 SQ Associations: Not Supported 00:09:21.798 UUID List: Not Supported 00:09:21.798 Multi-Domain Subsystem: Not Supported 00:09:21.798 Fixed Capacity Management: Not Supported 00:09:21.798 Variable Capacity 
Management: Not Supported 00:09:21.798 Delete Endurance Group: Not Supported 00:09:21.798 Delete NVM Set: Not Supported 00:09:21.798 Extended LBA Formats Supported: Supported 00:09:21.798 Flexible Data Placement Supported: Not Supported 00:09:21.798 00:09:21.798 Controller Memory Buffer Support 00:09:21.798 ================================ 00:09:21.798 Supported: No 00:09:21.798 00:09:21.798 Persistent Memory Region Support 00:09:21.798 ================================ 00:09:21.798 Supported: No 00:09:21.798 00:09:21.798 Admin Command Set Attributes 00:09:21.798 ============================ 00:09:21.798 Security Send/Receive: Not Supported 00:09:21.798 Format NVM: Supported 00:09:21.798 Firmware Activate/Download: Not Supported 00:09:21.798 Namespace Management: Supported 00:09:21.798 Device Self-Test: Not Supported 00:09:21.798 Directives: Supported 00:09:21.798 NVMe-MI: Not Supported 00:09:21.798 Virtualization Management: Not Supported 00:09:21.798 Doorbell Buffer Config: Supported 00:09:21.798 Get LBA Status Capability: Not Supported 00:09:21.798 Command & Feature Lockdown Capability: Not Supported 00:09:21.798 Abort Command Limit: 4 00:09:21.798 Async Event Request Limit: 4 00:09:21.798 Number of Firmware Slots: N/A 00:09:21.798 Firmware Slot 1 Read-Only: N/A 00:09:21.798 Firmware Activation Without Reset: N/A 00:09:21.798 Multiple Update Detection Support: N/A 00:09:21.798 Firmware Update Granularity: No Information Provided 00:09:21.798 Per-Namespace SMART Log: Yes 00:09:21.798 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.798 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:21.798 Command Effects Log Page: Supported 00:09:21.798 Get Log Page Extended Data: Supported 00:09:21.798 Telemetry Log Pages: Not Supported 00:09:21.798 Persistent Event Log Pages: Not Supported 00:09:21.798 Supported Log Pages Log Page: May Support 00:09:21.798 Commands Supported & Effects Log Page: Not Supported 00:09:21.798 Feature Identifiers & Effects Log Page:May Support 00:09:21.798 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.798 Data Area 4 for Telemetry Log: Not Supported 00:09:21.798 Error Log Page Entries Supported: 1 00:09:21.798 Keep Alive: Not Supported 00:09:21.798 00:09:21.798 NVM Command Set Attributes 00:09:21.798 ========================== 00:09:21.798 Submission Queue Entry Size 00:09:21.798 Max: 64 00:09:21.798 Min: 64 00:09:21.798 Completion Queue Entry Size 00:09:21.798 Max: 16 00:09:21.798 Min: 16 00:09:21.798 Number of Namespaces: 256 00:09:21.798 Compare Command: Supported 00:09:21.798 Write Uncorrectable Command: Not Supported 00:09:21.798 Dataset Management Command: Supported 00:09:21.798 Write Zeroes Command: Supported 00:09:21.798 Set Features Save Field: Supported 00:09:21.798 Reservations: Not Supported 00:09:21.798 Timestamp: Supported 00:09:21.798 Copy: Supported 00:09:21.798 Volatile Write Cache: Present 00:09:21.798 Atomic Write Unit (Normal): 1 00:09:21.798 Atomic Write Unit (PFail): 1 00:09:21.798 Atomic Compare & Write Unit: 1 00:09:21.798 Fused Compare & Write: Not Supported 00:09:21.798 Scatter-Gather List 00:09:21.798 SGL Command Set: Supported 00:09:21.798 SGL Keyed: Not Supported 00:09:21.798 SGL Bit Bucket Descriptor: Not Supported 00:09:21.798 SGL Metadata Pointer: Not Supported 00:09:21.798 Oversized SGL: Not Supported 00:09:21.798 SGL Metadata Address: Not Supported 00:09:21.798 SGL Offset: Not Supported 00:09:21.798 Transport SGL Data Block: Not Supported 00:09:21.798 Replay Protected Memory Block: Not Supported 00:09:21.798 
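The queue geometry repeated in each identify dump above (Submission Queue Entry Size Max/Min: 64, Completion Queue Entry Size Max/Min: 16) comes from the SQES and CQES bytes of the NVMe Identify Controller data, each of which packs two log2 sizes into a single byte. Below is a minimal decoding sketch in plain C, not SPDK code; the 0x66/0x44 values are assumptions inferred from the sizes these QEMU controllers report, not values read from a device.

/* Decode SQES/CQES from an NVMe Identify Controller buffer (bytes 512
 * and 513 per the NVMe 1.4 spec). Lower nibble = log2 of the required
 * (minimum) queue entry size, upper nibble = log2 of the maximum.
 * 0x66 and 0x44 are assumed here to match the dumps in this log. */
#include <stdint.h>
#include <stdio.h>

static void decode_qes(const char *name, uint8_t qes)
{
    printf("%s  Max: %u  Min: %u\n", name,
           1u << (qes >> 4),    /* upper nibble: maximum entry size   */
           1u << (qes & 0x0f)); /* lower nibble: required entry size  */
}

int main(void)
{
    decode_qes("Submission Queue Entry Size", 0x66); /* 2^6 = 64 bytes */
    decode_qes("Completion Queue Entry Size", 0x44); /* 2^4 = 16 bytes */
    return 0;
}

This prints Max: 64 / Min: 64 and Max: 16 / Min: 16, matching the NVM Command Set Attributes sections in these dumps.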
00:09:21.798 Firmware Slot Information 00:09:21.798 ========================= 00:09:21.798 Active slot: 1 00:09:21.798 Slot 1 Firmware Revision: 1.0 00:09:21.798 00:09:21.798 00:09:21.798 Commands Supported and Effects 00:09:21.798 ============================== 00:09:21.798 Admin Commands 00:09:21.798 -------------- 00:09:21.798 Delete I/O Submission Queue (00h): Supported 00:09:21.798 Create I/O Submission Queue (01h): Supported 00:09:21.798 Get Log Page (02h): Supported 00:09:21.798 Delete I/O Completion Queue (04h): Supported 00:09:21.798 Create I/O Completion Queue (05h): Supported 00:09:21.798 Identify (06h): Supported 00:09:21.798 Abort (08h): Supported 00:09:21.798 Set Features (09h): Supported 00:09:21.798 Get Features (0Ah): Supported 00:09:21.798 Asynchronous Event Request (0Ch): Supported 00:09:21.798 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.798 Directive Send (19h): Supported 00:09:21.798 Directive Receive (1Ah): Supported 00:09:21.798 Virtualization Management (1Ch): Supported 00:09:21.798 Doorbell Buffer Config (7Ch): Supported 00:09:21.798 Format NVM (80h): Supported LBA-Change 00:09:21.798 I/O Commands 00:09:21.798 ------------ 00:09:21.798 Flush (00h): Supported LBA-Change 00:09:21.798 Write (01h): Supported LBA-Change 00:09:21.798 Read (02h): Supported 00:09:21.798 Compare (05h): Supported 00:09:21.798 Write Zeroes (08h): Supported LBA-Change 00:09:21.798 Dataset Management (09h): Supported LBA-Change 00:09:21.798 Unknown (0Ch): Supported 00:09:21.798 Unknown (12h): Supported 00:09:21.798 Copy (19h): Supported LBA-Change 00:09:21.798 Unknown (1Dh): Supported LBA-Change 00:09:21.798 00:09:21.798 Error Log 00:09:21.798 ========= 00:09:21.798 00:09:21.798 Arbitration 00:09:21.798 =========== 00:09:21.798 Arbitration Burst: no limit 00:09:21.798 00:09:21.798 Power Management 00:09:21.798 ================ 00:09:21.798 Number of Power States: 1 00:09:21.798 Current Power State: Power State #0 00:09:21.798 Power State #0: 00:09:21.798 Max Power: 25.00 W 00:09:21.798 Non-Operational State: Operational 00:09:21.798 Entry Latency: 16 microseconds 00:09:21.798 Exit Latency: 4 microseconds 00:09:21.798 Relative Read Throughput: 0 00:09:21.798 Relative Read Latency: 0 00:09:21.798 Relative Write Throughput: 0 00:09:21.798 Relative Write Latency: 0 00:09:21.798 Idle Power: Not Reported 00:09:21.798 Active Power: Not Reported 00:09:21.798 Non-Operational Permissive Mode: Not Supported 00:09:21.798 00:09:21.798 Health Information 00:09:21.798 ================== 00:09:21.798 Critical Warnings: 00:09:21.798 Available Spare Space: OK 00:09:21.798 Temperature: OK 00:09:21.798 Device Reliability: OK 00:09:21.798 Read Only: No 00:09:21.798 Volatile Memory Backup: OK 00:09:21.798 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.798 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.798 Available Spare: 0% 00:09:21.798 Available Spare Threshold: 0% 00:09:21.798 Life Percentage Used: 0% 00:09:21.798 Data Units Read: 1227 00:09:21.798 Data Units Written: [2024-11-18 14:55:45.331635] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 75162 terminated unexpected 00:09:21.798 569 00:09:21.798 Host Read Commands: 61387 00:09:21.798 Host Write Commands: 30254 00:09:21.798 Controller Busy Time: 0 minutes 00:09:21.798 Power Cycles: 0 00:09:21.798 Power On Hours: 0 hours 00:09:21.798 Unsafe Shutdowns: 0 00:09:21.798 Unrecoverable Media Errors: 0 00:09:21.798 Lifetime Error Log Entries: 0 00:09:21.798 Warning Temperature Time: 0 minutes 
00:09:21.798 Critical Temperature Time: 0 minutes 00:09:21.798 00:09:21.798 Number of Queues 00:09:21.798 ================ 00:09:21.798 Number of I/O Submission Queues: 64 00:09:21.798 Number of I/O Completion Queues: 64 00:09:21.798 00:09:21.798 ZNS Specific Controller Data 00:09:21.798 ============================ 00:09:21.798 Zone Append Size Limit: 0 00:09:21.798 00:09:21.798 00:09:21.798 Active Namespaces 00:09:21.799 ================= 00:09:21.799 Namespace ID:1 00:09:21.799 Error Recovery Timeout: Unlimited 00:09:21.799 Command Set Identifier: NVM (00h) 00:09:21.799 Deallocate: Supported 00:09:21.799 Deallocated/Unwritten Error: Supported 00:09:21.799 Deallocated Read Value: All 0x00 00:09:21.799 Deallocate in Write Zeroes: Not Supported 00:09:21.799 Deallocated Guard Field: 0xFFFF 00:09:21.799 Flush: Supported 00:09:21.799 Reservation: Not Supported 00:09:21.799 Namespace Sharing Capabilities: Private 00:09:21.799 Size (in LBAs): 1310720 (5GiB) 00:09:21.799 Capacity (in LBAs): 1310720 (5GiB) 00:09:21.799 Utilization (in LBAs): 1310720 (5GiB) 00:09:21.799 Thin Provisioning: Not Supported 00:09:21.799 Per-NS Atomic Units: No 00:09:21.799 Maximum Single Source Range Length: 128 00:09:21.799 Maximum Copy Length: 128 00:09:21.799 Maximum Source Range Count: 128 00:09:21.799 NGUID/EUI64 Never Reused: No 00:09:21.799 Namespace Write Protected: No 00:09:21.799 Number of LBA Formats: 8 00:09:21.799 Current LBA Format: LBA Format #04 00:09:21.799 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.799 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.799 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.799 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.799 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.799 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.799 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.799 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.799 00:09:21.799 ===================================================== 00:09:21.799 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:21.799 ===================================================== 00:09:21.799 Controller Capabilities/Features 00:09:21.799 ================================ 00:09:21.799 Vendor ID: 1b36 00:09:21.799 Subsystem Vendor ID: 1af4 00:09:21.799 Serial Number: 12342 00:09:21.799 Model Number: QEMU NVMe Ctrl 00:09:21.799 Firmware Version: 8.0.0 00:09:21.799 Recommended Arb Burst: 6 00:09:21.799 IEEE OUI Identifier: 00 54 52 00:09:21.799 Multi-path I/O 00:09:21.799 May have multiple subsystem ports: No 00:09:21.799 May have multiple controllers: No 00:09:21.799 Associated with SR-IOV VF: No 00:09:21.799 Max Data Transfer Size: 524288 00:09:21.799 Max Number of Namespaces: 256 00:09:21.799 Max Number of I/O Queues: 64 00:09:21.799 NVMe Specification Version (VS): 1.4 00:09:21.799 NVMe Specification Version (Identify): 1.4 00:09:21.799 Maximum Queue Entries: 2048 00:09:21.799 Contiguous Queues Required: Yes 00:09:21.799 Arbitration Mechanisms Supported 00:09:21.799 Weighted Round Robin: Not Supported 00:09:21.799 Vendor Specific: Not Supported 00:09:21.799 Reset Timeout: 7500 ms 00:09:21.799 Doorbell Stride: 4 bytes 00:09:21.799 NVM Subsystem Reset: Not Supported 00:09:21.799 Command Sets Supported 00:09:21.799 NVM Command Set: Supported 00:09:21.799 Boot Partition: Not Supported 00:09:21.799 Memory Page Size Minimum: 4096 bytes 00:09:21.799 Memory Page Size Maximum: 65536 bytes 00:09:21.799 Persistent Memory Region: Not Supported 00:09:21.799 
Optional Asynchronous Events Supported 00:09:21.799 Namespace Attribute Notices: Supported 00:09:21.799 Firmware Activation Notices: Not Supported 00:09:21.799 ANA Change Notices: Not Supported 00:09:21.799 PLE Aggregate Log Change Notices: Not Supported 00:09:21.799 LBA Status Info Alert Notices: Not Supported 00:09:21.799 EGE Aggregate Log Change Notices: Not Supported 00:09:21.799 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.799 Zone Descriptor Change Notices: Not Supported 00:09:21.799 Discovery Log Change Notices: Not Supported 00:09:21.799 Controller Attributes 00:09:21.799 128-bit Host Identifier: Not Supported 00:09:21.799 Non-Operational Permissive Mode: Not Supported 00:09:21.799 NVM Sets: Not Supported 00:09:21.799 Read Recovery Levels: Not Supported 00:09:21.799 Endurance Groups: Not Supported 00:09:21.799 Predictable Latency Mode: Not Supported 00:09:21.799 Traffic Based Keep ALive: Not Supported 00:09:21.799 Namespace Granularity: Not Supported 00:09:21.799 SQ Associations: Not Supported 00:09:21.799 UUID List: Not Supported 00:09:21.799 Multi-Domain Subsystem: Not Supported 00:09:21.799 Fixed Capacity Management: Not Supported 00:09:21.799 Variable Capacity Management: Not Supported 00:09:21.799 Delete Endurance Group: Not Supported 00:09:21.799 Delete NVM Set: Not Supported 00:09:21.799 Extended LBA Formats Supported: Supported 00:09:21.799 Flexible Data Placement Supported: Not Supported 00:09:21.799 00:09:21.799 Controller Memory Buffer Support 00:09:21.799 ================================ 00:09:21.799 Supported: No 00:09:21.799 00:09:21.799 Persistent Memory Region Support 00:09:21.799 ================================ 00:09:21.799 Supported: No 00:09:21.799 00:09:21.799 Admin Command Set Attributes 00:09:21.799 ============================ 00:09:21.799 Security Send/Receive: Not Supported 00:09:21.799 Format NVM: Supported 00:09:21.799 Firmware Activate/Download: Not Supported 00:09:21.799 Namespace Management: Supported 00:09:21.799 Device Self-Test: Not Supported 00:09:21.799 Directives: Supported 00:09:21.799 NVMe-MI: Not Supported 00:09:21.799 Virtualization Management: Not Supported 00:09:21.799 Doorbell Buffer Config: Supported 00:09:21.799 Get LBA Status Capability: Not Supported 00:09:21.799 Command & Feature Lockdown Capability: Not Supported 00:09:21.799 Abort Command Limit: 4 00:09:21.799 Async Event Request Limit: 4 00:09:21.799 Number of Firmware Slots: N/A 00:09:21.799 Firmware Slot 1 Read-Only: N/A 00:09:21.799 Firmware Activation Without Reset: N/A 00:09:21.799 Multiple Update Detection Support: N/A 00:09:21.799 Firmware Update Granularity: No Information Provided 00:09:21.799 Per-Namespace SMART Log: Yes 00:09:21.799 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.799 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:21.799 Command Effects Log Page: Supported 00:09:21.799 Get Log Page Extended Data: Supported 00:09:21.799 Telemetry Log Pages: Not Supported 00:09:21.799 Persistent Event Log Pages: Not Supported 00:09:21.799 Supported Log Pages Log Page: May Support 00:09:21.799 Commands Supported & Effects Log Page: Not Supported 00:09:21.799 Feature Identifiers & Effects Log Page:May Support 00:09:21.799 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.799 Data Area 4 for Telemetry Log: Not Supported 00:09:21.799 Error Log Page Entries Supported: 1 00:09:21.799 Keep Alive: Not Supported 00:09:21.799 00:09:21.799 NVM Command Set Attributes 00:09:21.799 ========================== 00:09:21.799 Submission Queue Entry Size 
00:09:21.799 Max: 64 00:09:21.799 Min: 64 00:09:21.799 Completion Queue Entry Size 00:09:21.799 Max: 16 00:09:21.799 Min: 16 00:09:21.799 Number of Namespaces: 256 00:09:21.799 Compare Command: Supported 00:09:21.799 Write Uncorrectable Command: Not Supported 00:09:21.799 Dataset Management Command: Supported 00:09:21.799 Write Zeroes Command: Supported 00:09:21.799 Set Features Save Field: Supported 00:09:21.799 Reservations: Not Supported 00:09:21.799 Timestamp: Supported 00:09:21.799 Copy: Supported 00:09:21.799 Volatile Write Cache: Present 00:09:21.800 Atomic Write Unit (Normal): 1 00:09:21.800 Atomic Write Unit (PFail): 1 00:09:21.800 Atomic Compare & Write Unit: 1 00:09:21.800 Fused Compare & Write: Not Supported 00:09:21.800 Scatter-Gather List 00:09:21.800 SGL Command Set: Supported 00:09:21.800 SGL Keyed: Not Supported 00:09:21.800 SGL Bit Bucket Descriptor: Not Supported 00:09:21.800 SGL Metadata Pointer: Not Supported 00:09:21.800 Oversized SGL: Not Supported 00:09:21.800 SGL Metadata Address: Not Supported 00:09:21.800 SGL Offset: Not Supported 00:09:21.800 Transport SGL Data Block: Not Supported 00:09:21.800 Replay Protected Memory Block: Not Supported 00:09:21.800 00:09:21.800 Firmware Slot Information 00:09:21.800 ========================= 00:09:21.800 Active slot: 1 00:09:21.800 Slot 1 Firmware Revision: 1.0 00:09:21.800 00:09:21.800 00:09:21.800 Commands Supported and Effects 00:09:21.800 ============================== 00:09:21.800 Admin Commands 00:09:21.800 -------------- 00:09:21.800 Delete I/O Submission Queue (00h): Supported 00:09:21.800 Create I/O Submission Queue (01h): Supported 00:09:21.800 Get Log Page (02h): Supported 00:09:21.800 Delete I/O Completion Queue (04h): Supported 00:09:21.800 Create I/O Completion Queue (05h): Supported 00:09:21.800 Identify (06h): Supported 00:09:21.800 Abort (08h): Supported 00:09:21.800 Set Features (09h): Supported 00:09:21.800 Get Features (0Ah): Supported 00:09:21.800 Asynchronous Event Request (0Ch): Supported 00:09:21.800 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.800 Directive Send (19h): Supported 00:09:21.800 Directive Receive (1Ah): Supported 00:09:21.800 Virtualization Management (1Ch): Supported 00:09:21.800 Doorbell Buffer Config (7Ch): Supported 00:09:21.800 Format NVM (80h): Supported LBA-Change 00:09:21.800 I/O Commands 00:09:21.800 ------------ 00:09:21.800 Flush (00h): Supported LBA-Change 00:09:21.800 Write (01h): Supported LBA-Change 00:09:21.800 Read (02h): Supported 00:09:21.800 Compare (05h): Supported 00:09:21.800 Write Zeroes (08h): Supported LBA-Change 00:09:21.800 Dataset Management (09h): Supported LBA-Change 00:09:21.800 Unknown (0Ch): Supported 00:09:21.800 Unknown (12h): Supported 00:09:21.800 Copy (19h): Supported LBA-Change 00:09:21.800 Unknown (1Dh): Supported LBA-Change 00:09:21.800 00:09:21.800 Error Log 00:09:21.800 ========= 00:09:21.800 00:09:21.800 Arbitration 00:09:21.800 =========== 00:09:21.800 Arbitration Burst: no limit 00:09:21.800 00:09:21.800 Power Management 00:09:21.800 ================ 00:09:21.800 Number of Power States: 1 00:09:21.800 Current Power State: Power State #0 00:09:21.800 Power State #0: 00:09:21.800 Max Power: 25.00 W 00:09:21.800 Non-Operational State: Operational 00:09:21.800 Entry Latency: 16 microseconds 00:09:21.800 Exit Latency: 4 microseconds 00:09:21.800 Relative Read Throughput: 0 00:09:21.800 Relative Read Latency: 0 00:09:21.800 Relative Write Throughput: 0 00:09:21.800 Relative Write Latency: 0 00:09:21.800 Idle Power: Not 
Reported 00:09:21.800 Active Power: Not Reported 00:09:21.800 Non-Operational Permissive Mode: Not Supported 00:09:21.800 00:09:21.800 Health Information 00:09:21.800 ================== 00:09:21.800 Critical Warnings: 00:09:21.800 Available Spare Space: OK 00:09:21.800 Temperature: OK 00:09:21.800 Device Reliability: OK 00:09:21.800 Read Only: No 00:09:21.800 Volatile Memory Backup: OK 00:09:21.800 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.800 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.800 Available Spare: 0% 00:09:21.800 Available Spare Threshold: 0% 00:09:21.800 Life Percentage Used: 0% 00:09:21.800 Data Units Read: 3816 00:09:21.800 Data Units Written: 1763 00:09:21.800 Host Read Commands: 185828 00:09:21.800 Host Write Commands: 91387 00:09:21.800 Controller Busy Time: 0 minutes 00:09:21.800 Power Cycles: 0 00:09:21.800 Power On Hours: 0 hours 00:09:21.800 Unsafe Shutdowns: 0 00:09:21.800 Unrecoverable Media Errors: 0 00:09:21.800 Lifetime Error Log Entries: 0 00:09:21.800 Warning Temperature Time: 0 minutes 00:09:21.800 Critical Temperature Time: 0 minutes 00:09:21.800 00:09:21.800 Number of Queues 00:09:21.800 ================ 00:09:21.800 Number of I/O Submission Queues: 64 00:09:21.800 Number of I/O Completion Queues: 64 00:09:21.800 00:09:21.800 ZNS Specific Controller Data 00:09:21.800 ============================ 00:09:21.800 Zone Append Size Limit: 0 00:09:21.800 00:09:21.800 00:09:21.800 Active Namespaces 00:09:21.800 ================= 00:09:21.800 Namespace ID:1 00:09:21.800 Error Recovery Timeout: Unlimited 00:09:21.800 Command Set Identifier: NVM (00h) 00:09:21.800 Deallocate: Supported 00:09:21.800 Deallocated/Unwritten Error: Supported 00:09:21.800 Deallocated Read Value: All 0x00 00:09:21.800 Deallocate in Write Zeroes: Not Supported 00:09:21.800 Deallocated Guard Field: 0xFFFF 00:09:21.800 Flush: Supported 00:09:21.800 Reservation: Not Supported 00:09:21.800 Namespace Sharing Capabilities: Private 00:09:21.800 Size (in LBAs): 1048576 (4GiB) 00:09:21.800 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.800 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.800 Thin Provisioning: Not Supported 00:09:21.800 Per-NS Atomic Units: No 00:09:21.800 Maximum Single Source Range Length: 128 00:09:21.800 Maximum Copy Length: 128 00:09:21.800 Maximum Source Range Count: 128 00:09:21.800 NGUID/EUI64 Never Reused: No 00:09:21.800 Namespace Write Protected: No 00:09:21.800 Number of LBA Formats: 8 00:09:21.800 Current LBA Format: LBA Format #04 00:09:21.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.800 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.800 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.800 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.800 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.800 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.800 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.800 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.800 00:09:21.800 Namespace ID:2 00:09:21.800 Error Recovery Timeout: Unlimited 00:09:21.800 Command Set Identifier: NVM (00h) 00:09:21.800 Deallocate: Supported 00:09:21.800 Deallocated/Unwritten Error: Supported 00:09:21.800 Deallocated Read Value: All 0x00 00:09:21.800 Deallocate in Write Zeroes: Not Supported 00:09:21.800 Deallocated Guard Field: 0xFFFF 00:09:21.800 Flush: Supported 00:09:21.800 Reservation: Not Supported 00:09:21.800 Namespace Sharing Capabilities: Private 00:09:21.800 Size (in LBAs): 1048576 (4GiB) 00:09:21.800 
Capacity (in LBAs): 1048576 (4GiB) 00:09:21.800 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.800 Thin Provisioning: Not Supported 00:09:21.800 Per-NS Atomic Units: No 00:09:21.800 Maximum Single Source Range Length: 128 00:09:21.800 Maximum Copy Length: 128 00:09:21.800 Maximum Source Range Count: 128 00:09:21.800 NGUID/EUI64 Never Reused: No 00:09:21.800 Namespace Write Protected: No 00:09:21.800 Number of LBA Formats: 8 00:09:21.800 Current LBA Format: LBA Format #04 00:09:21.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.800 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.800 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.800 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.800 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.800 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.800 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.800 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.800 00:09:21.800 Namespace ID:3 00:09:21.800 Error Recovery Timeout: Unlimited 00:09:21.800 Command Set Identifier: NVM (00h) 00:09:21.800 Deallocate: Supported 00:09:21.800 Deallocated/Unwritten Error: Supported 00:09:21.800 Deallocated Read Value: All 0x00 00:09:21.800 Deallocate in Write Zeroes: Not Supported 00:09:21.800 Deallocated Guard Field: 0xFFFF 00:09:21.800 Flush: Supported 00:09:21.800 Reservation: Not Supported 00:09:21.800 Namespace Sharing Capabilities: Private 00:09:21.800 Size (in LBAs): 1048576 (4GiB) 00:09:21.800 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.800 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.800 Thin Provisioning: Not Supported 00:09:21.800 Per-NS Atomic Units: No 00:09:21.800 Maximum Single Source Range Length: 128 00:09:21.800 Maximum Copy Length: 128 00:09:21.800 Maximum Source Range Count: 128 00:09:21.800 NGUID/EUI64 Never Reused: No 00:09:21.800 Namespace Write Protected: No 00:09:21.800 Number of LBA Formats: 8 00:09:21.800 Current LBA Format: LBA Format #04 00:09:21.801 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.801 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.801 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.801 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.801 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.801 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.801 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.801 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.801 00:09:21.801 14:55:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.801 14:55:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:22.060 ===================================================== 00:09:22.060 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:22.060 ===================================================== 00:09:22.060 Controller Capabilities/Features 00:09:22.060 ================================ 00:09:22.060 Vendor ID: 1b36 00:09:22.060 Subsystem Vendor ID: 1af4 00:09:22.060 Serial Number: 12340 00:09:22.060 Model Number: QEMU NVMe Ctrl 00:09:22.060 Firmware Version: 8.0.0 00:09:22.060 Recommended Arb Burst: 6 00:09:22.060 IEEE OUI Identifier: 00 54 52 00:09:22.060 Multi-path I/O 00:09:22.060 May have multiple subsystem ports: No 00:09:22.060 May have multiple controllers: No 00:09:22.060 Associated with SR-IOV VF: No 00:09:22.060 Max Data Transfer Size: 524288 00:09:22.060 Max Number of Namespaces: 256 00:09:22.060 Max Number of I/O 
Queues: 64 00:09:22.060 NVMe Specification Version (VS): 1.4 00:09:22.060 NVMe Specification Version (Identify): 1.4 00:09:22.060 Maximum Queue Entries: 2048 00:09:22.060 Contiguous Queues Required: Yes 00:09:22.060 Arbitration Mechanisms Supported 00:09:22.060 Weighted Round Robin: Not Supported 00:09:22.060 Vendor Specific: Not Supported 00:09:22.060 Reset Timeout: 7500 ms 00:09:22.060 Doorbell Stride: 4 bytes 00:09:22.060 NVM Subsystem Reset: Not Supported 00:09:22.060 Command Sets Supported 00:09:22.060 NVM Command Set: Supported 00:09:22.060 Boot Partition: Not Supported 00:09:22.060 Memory Page Size Minimum: 4096 bytes 00:09:22.060 Memory Page Size Maximum: 65536 bytes 00:09:22.060 Persistent Memory Region: Not Supported 00:09:22.060 Optional Asynchronous Events Supported 00:09:22.060 Namespace Attribute Notices: Supported 00:09:22.060 Firmware Activation Notices: Not Supported 00:09:22.060 ANA Change Notices: Not Supported 00:09:22.060 PLE Aggregate Log Change Notices: Not Supported 00:09:22.060 LBA Status Info Alert Notices: Not Supported 00:09:22.060 EGE Aggregate Log Change Notices: Not Supported 00:09:22.060 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.060 Zone Descriptor Change Notices: Not Supported 00:09:22.060 Discovery Log Change Notices: Not Supported 00:09:22.060 Controller Attributes 00:09:22.060 128-bit Host Identifier: Not Supported 00:09:22.060 Non-Operational Permissive Mode: Not Supported 00:09:22.060 NVM Sets: Not Supported 00:09:22.060 Read Recovery Levels: Not Supported 00:09:22.060 Endurance Groups: Not Supported 00:09:22.060 Predictable Latency Mode: Not Supported 00:09:22.060 Traffic Based Keep ALive: Not Supported 00:09:22.060 Namespace Granularity: Not Supported 00:09:22.060 SQ Associations: Not Supported 00:09:22.060 UUID List: Not Supported 00:09:22.060 Multi-Domain Subsystem: Not Supported 00:09:22.060 Fixed Capacity Management: Not Supported 00:09:22.060 Variable Capacity Management: Not Supported 00:09:22.060 Delete Endurance Group: Not Supported 00:09:22.060 Delete NVM Set: Not Supported 00:09:22.060 Extended LBA Formats Supported: Supported 00:09:22.060 Flexible Data Placement Supported: Not Supported 00:09:22.060 00:09:22.060 Controller Memory Buffer Support 00:09:22.060 ================================ 00:09:22.060 Supported: No 00:09:22.060 00:09:22.060 Persistent Memory Region Support 00:09:22.060 ================================ 00:09:22.060 Supported: No 00:09:22.060 00:09:22.060 Admin Command Set Attributes 00:09:22.060 ============================ 00:09:22.060 Security Send/Receive: Not Supported 00:09:22.060 Format NVM: Supported 00:09:22.060 Firmware Activate/Download: Not Supported 00:09:22.060 Namespace Management: Supported 00:09:22.060 Device Self-Test: Not Supported 00:09:22.060 Directives: Supported 00:09:22.060 NVMe-MI: Not Supported 00:09:22.060 Virtualization Management: Not Supported 00:09:22.060 Doorbell Buffer Config: Supported 00:09:22.060 Get LBA Status Capability: Not Supported 00:09:22.060 Command & Feature Lockdown Capability: Not Supported 00:09:22.060 Abort Command Limit: 4 00:09:22.060 Async Event Request Limit: 4 00:09:22.060 Number of Firmware Slots: N/A 00:09:22.060 Firmware Slot 1 Read-Only: N/A 00:09:22.060 Firmware Activation Without Reset: N/A 00:09:22.060 Multiple Update Detection Support: N/A 00:09:22.060 Firmware Update Granularity: No Information Provided 00:09:22.060 Per-Namespace SMART Log: Yes 00:09:22.060 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.060 Subsystem NQN: 
nqn.2019-08.org.qemu:12340 00:09:22.060 Command Effects Log Page: Supported 00:09:22.060 Get Log Page Extended Data: Supported 00:09:22.060 Telemetry Log Pages: Not Supported 00:09:22.060 Persistent Event Log Pages: Not Supported 00:09:22.060 Supported Log Pages Log Page: May Support 00:09:22.060 Commands Supported & Effects Log Page: Not Supported 00:09:22.060 Feature Identifiers & Effects Log Page:May Support 00:09:22.060 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.060 Data Area 4 for Telemetry Log: Not Supported 00:09:22.060 Error Log Page Entries Supported: 1 00:09:22.060 Keep Alive: Not Supported 00:09:22.060 00:09:22.060 NVM Command Set Attributes 00:09:22.060 ========================== 00:09:22.060 Submission Queue Entry Size 00:09:22.060 Max: 64 00:09:22.060 Min: 64 00:09:22.060 Completion Queue Entry Size 00:09:22.060 Max: 16 00:09:22.060 Min: 16 00:09:22.060 Number of Namespaces: 256 00:09:22.060 Compare Command: Supported 00:09:22.060 Write Uncorrectable Command: Not Supported 00:09:22.060 Dataset Management Command: Supported 00:09:22.060 Write Zeroes Command: Supported 00:09:22.060 Set Features Save Field: Supported 00:09:22.060 Reservations: Not Supported 00:09:22.060 Timestamp: Supported 00:09:22.060 Copy: Supported 00:09:22.060 Volatile Write Cache: Present 00:09:22.060 Atomic Write Unit (Normal): 1 00:09:22.060 Atomic Write Unit (PFail): 1 00:09:22.060 Atomic Compare & Write Unit: 1 00:09:22.060 Fused Compare & Write: Not Supported 00:09:22.060 Scatter-Gather List 00:09:22.060 SGL Command Set: Supported 00:09:22.060 SGL Keyed: Not Supported 00:09:22.060 SGL Bit Bucket Descriptor: Not Supported 00:09:22.060 SGL Metadata Pointer: Not Supported 00:09:22.060 Oversized SGL: Not Supported 00:09:22.060 SGL Metadata Address: Not Supported 00:09:22.060 SGL Offset: Not Supported 00:09:22.060 Transport SGL Data Block: Not Supported 00:09:22.060 Replay Protected Memory Block: Not Supported 00:09:22.060 00:09:22.060 Firmware Slot Information 00:09:22.060 ========================= 00:09:22.060 Active slot: 1 00:09:22.060 Slot 1 Firmware Revision: 1.0 00:09:22.060 00:09:22.060 00:09:22.060 Commands Supported and Effects 00:09:22.060 ============================== 00:09:22.060 Admin Commands 00:09:22.060 -------------- 00:09:22.060 Delete I/O Submission Queue (00h): Supported 00:09:22.060 Create I/O Submission Queue (01h): Supported 00:09:22.060 Get Log Page (02h): Supported 00:09:22.060 Delete I/O Completion Queue (04h): Supported 00:09:22.060 Create I/O Completion Queue (05h): Supported 00:09:22.060 Identify (06h): Supported 00:09:22.060 Abort (08h): Supported 00:09:22.060 Set Features (09h): Supported 00:09:22.060 Get Features (0Ah): Supported 00:09:22.060 Asynchronous Event Request (0Ch): Supported 00:09:22.060 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.060 Directive Send (19h): Supported 00:09:22.060 Directive Receive (1Ah): Supported 00:09:22.060 Virtualization Management (1Ch): Supported 00:09:22.060 Doorbell Buffer Config (7Ch): Supported 00:09:22.060 Format NVM (80h): Supported LBA-Change 00:09:22.060 I/O Commands 00:09:22.060 ------------ 00:09:22.060 Flush (00h): Supported LBA-Change 00:09:22.060 Write (01h): Supported LBA-Change 00:09:22.060 Read (02h): Supported 00:09:22.061 Compare (05h): Supported 00:09:22.061 Write Zeroes (08h): Supported LBA-Change 00:09:22.061 Dataset Management (09h): Supported LBA-Change 00:09:22.061 Unknown (0Ch): Supported 00:09:22.061 Unknown (12h): Supported 00:09:22.061 Copy (19h): Supported LBA-Change 
00:09:22.061 Unknown (1Dh): Supported LBA-Change 00:09:22.061 00:09:22.061 Error Log 00:09:22.061 ========= 00:09:22.061 00:09:22.061 Arbitration 00:09:22.061 =========== 00:09:22.061 Arbitration Burst: no limit 00:09:22.061 00:09:22.061 Power Management 00:09:22.061 ================ 00:09:22.061 Number of Power States: 1 00:09:22.061 Current Power State: Power State #0 00:09:22.061 Power State #0: 00:09:22.061 Max Power: 25.00 W 00:09:22.061 Non-Operational State: Operational 00:09:22.061 Entry Latency: 16 microseconds 00:09:22.061 Exit Latency: 4 microseconds 00:09:22.061 Relative Read Throughput: 0 00:09:22.061 Relative Read Latency: 0 00:09:22.061 Relative Write Throughput: 0 00:09:22.061 Relative Write Latency: 0 00:09:22.061 Idle Power: Not Reported 00:09:22.061 Active Power: Not Reported 00:09:22.061 Non-Operational Permissive Mode: Not Supported 00:09:22.061 00:09:22.061 Health Information 00:09:22.061 ================== 00:09:22.061 Critical Warnings: 00:09:22.061 Available Spare Space: OK 00:09:22.061 Temperature: OK 00:09:22.061 Device Reliability: OK 00:09:22.061 Read Only: No 00:09:22.061 Volatile Memory Backup: OK 00:09:22.061 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.061 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.061 Available Spare: 0% 00:09:22.061 Available Spare Threshold: 0% 00:09:22.061 Life Percentage Used: 0% 00:09:22.061 Data Units Read: 1819 00:09:22.061 Data Units Written: 841 00:09:22.061 Host Read Commands: 90197 00:09:22.061 Host Write Commands: 44911 00:09:22.061 Controller Busy Time: 0 minutes 00:09:22.061 Power Cycles: 0 00:09:22.061 Power On Hours: 0 hours 00:09:22.061 Unsafe Shutdowns: 0 00:09:22.061 Unrecoverable Media Errors: 0 00:09:22.061 Lifetime Error Log Entries: 0 00:09:22.061 Warning Temperature Time: 0 minutes 00:09:22.061 Critical Temperature Time: 0 minutes 00:09:22.061 00:09:22.061 Number of Queues 00:09:22.061 ================ 00:09:22.061 Number of I/O Submission Queues: 64 00:09:22.061 Number of I/O Completion Queues: 64 00:09:22.061 00:09:22.061 ZNS Specific Controller Data 00:09:22.061 ============================ 00:09:22.061 Zone Append Size Limit: 0 00:09:22.061 00:09:22.061 00:09:22.061 Active Namespaces 00:09:22.061 ================= 00:09:22.061 Namespace ID:1 00:09:22.061 Error Recovery Timeout: Unlimited 00:09:22.061 Command Set Identifier: NVM (00h) 00:09:22.061 Deallocate: Supported 00:09:22.061 Deallocated/Unwritten Error: Supported 00:09:22.061 Deallocated Read Value: All 0x00 00:09:22.061 Deallocate in Write Zeroes: Not Supported 00:09:22.061 Deallocated Guard Field: 0xFFFF 00:09:22.061 Flush: Supported 00:09:22.061 Reservation: Not Supported 00:09:22.061 Metadata Transferred as: Separate Metadata Buffer 00:09:22.061 Namespace Sharing Capabilities: Private 00:09:22.061 Size (in LBAs): 1548666 (5GiB) 00:09:22.061 Capacity (in LBAs): 1548666 (5GiB) 00:09:22.061 Utilization (in LBAs): 1548666 (5GiB) 00:09:22.061 Thin Provisioning: Not Supported 00:09:22.061 Per-NS Atomic Units: No 00:09:22.061 Maximum Single Source Range Length: 128 00:09:22.061 Maximum Copy Length: 128 00:09:22.061 Maximum Source Range Count: 128 00:09:22.061 NGUID/EUI64 Never Reused: No 00:09:22.061 Namespace Write Protected: No 00:09:22.061 Number of LBA Formats: 8 00:09:22.061 Current LBA Format: LBA Format #07 00:09:22.061 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.061 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.061 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.061 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:22.061 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.061 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.061 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.061 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.061 00:09:22.061 14:55:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.061 14:55:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:22.320 ===================================================== 00:09:22.320 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:22.320 ===================================================== 00:09:22.320 Controller Capabilities/Features 00:09:22.320 ================================ 00:09:22.320 Vendor ID: 1b36 00:09:22.320 Subsystem Vendor ID: 1af4 00:09:22.320 Serial Number: 12341 00:09:22.320 Model Number: QEMU NVMe Ctrl 00:09:22.320 Firmware Version: 8.0.0 00:09:22.320 Recommended Arb Burst: 6 00:09:22.320 IEEE OUI Identifier: 00 54 52 00:09:22.320 Multi-path I/O 00:09:22.320 May have multiple subsystem ports: No 00:09:22.320 May have multiple controllers: No 00:09:22.320 Associated with SR-IOV VF: No 00:09:22.320 Max Data Transfer Size: 524288 00:09:22.320 Max Number of Namespaces: 256 00:09:22.320 Max Number of I/O Queues: 64 00:09:22.320 NVMe Specification Version (VS): 1.4 00:09:22.320 NVMe Specification Version (Identify): 1.4 00:09:22.320 Maximum Queue Entries: 2048 00:09:22.320 Contiguous Queues Required: Yes 00:09:22.320 Arbitration Mechanisms Supported 00:09:22.320 Weighted Round Robin: Not Supported 00:09:22.320 Vendor Specific: Not Supported 00:09:22.320 Reset Timeout: 7500 ms 00:09:22.320 Doorbell Stride: 4 bytes 00:09:22.320 NVM Subsystem Reset: Not Supported 00:09:22.320 Command Sets Supported 00:09:22.320 NVM Command Set: Supported 00:09:22.320 Boot Partition: Not Supported 00:09:22.320 Memory Page Size Minimum: 4096 bytes 00:09:22.320 Memory Page Size Maximum: 65536 bytes 00:09:22.320 Persistent Memory Region: Not Supported 00:09:22.320 Optional Asynchronous Events Supported 00:09:22.320 Namespace Attribute Notices: Supported 00:09:22.320 Firmware Activation Notices: Not Supported 00:09:22.320 ANA Change Notices: Not Supported 00:09:22.320 PLE Aggregate Log Change Notices: Not Supported 00:09:22.320 LBA Status Info Alert Notices: Not Supported 00:09:22.320 EGE Aggregate Log Change Notices: Not Supported 00:09:22.320 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.320 Zone Descriptor Change Notices: Not Supported 00:09:22.320 Discovery Log Change Notices: Not Supported 00:09:22.320 Controller Attributes 00:09:22.320 128-bit Host Identifier: Not Supported 00:09:22.320 Non-Operational Permissive Mode: Not Supported 00:09:22.320 NVM Sets: Not Supported 00:09:22.320 Read Recovery Levels: Not Supported 00:09:22.320 Endurance Groups: Not Supported 00:09:22.320 Predictable Latency Mode: Not Supported 00:09:22.320 Traffic Based Keep ALive: Not Supported 00:09:22.320 Namespace Granularity: Not Supported 00:09:22.320 SQ Associations: Not Supported 00:09:22.320 UUID List: Not Supported 00:09:22.320 Multi-Domain Subsystem: Not Supported 00:09:22.320 Fixed Capacity Management: Not Supported 00:09:22.320 Variable Capacity Management: Not Supported 00:09:22.320 Delete Endurance Group: Not Supported 00:09:22.320 Delete NVM Set: Not Supported 00:09:22.320 Extended LBA Formats Supported: Supported 00:09:22.320 Flexible Data Placement Supported: Not Supported 00:09:22.320 00:09:22.320 
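The namespace and health sections in these dumps pair raw values with derived ones: Size/Capacity/Utilization in LBAs next to a GiB figure, and temperature in Kelvin next to Celsius. Both derivations are straightforward; the following is a self-contained C sketch using constants copied from this log (1310720 and 1048576 LBAs at the 4096-byte current LBA format #04, and 323 K from the health log page).

/* Reproduce the derived figures shown in the identify dumps:
 * capacity = LBA count * data size of the current LBA format,
 * Celsius  = Kelvin - 273 (the convention these dumps use). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t lba_counts[]  = { 1310720, 1048576 }; /* ctrl 12341, 12342 */
    const uint64_t lba_data_size = 4096;                 /* LBA format #04    */

    for (unsigned i = 0; i < 2; i++) {
        uint64_t bytes = lba_counts[i] * lba_data_size;
        printf("%llu LBAs -> %.0f GiB\n",
               (unsigned long long)lba_counts[i],
               (double)bytes / (1024.0 * 1024.0 * 1024.0));
    }

    unsigned kelvin = 323; /* Current Temperature from the health page */
    printf("Current Temperature: %u Kelvin (%u Celsius)\n",
           kelvin, kelvin - 273);
    return 0;
}

Output: 1310720 LBAs -> 5 GiB, 1048576 LBAs -> 4 GiB, and 323 Kelvin (50 Celsius), matching the controller 12341 and 12342 namespace and health sections.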
Controller Memory Buffer Support 00:09:22.320 ================================ 00:09:22.320 Supported: No 00:09:22.320 00:09:22.320 Persistent Memory Region Support 00:09:22.320 ================================ 00:09:22.320 Supported: No 00:09:22.320 00:09:22.320 Admin Command Set Attributes 00:09:22.320 ============================ 00:09:22.320 Security Send/Receive: Not Supported 00:09:22.320 Format NVM: Supported 00:09:22.320 Firmware Activate/Download: Not Supported 00:09:22.320 Namespace Management: Supported 00:09:22.320 Device Self-Test: Not Supported 00:09:22.320 Directives: Supported 00:09:22.320 NVMe-MI: Not Supported 00:09:22.320 Virtualization Management: Not Supported 00:09:22.320 Doorbell Buffer Config: Supported 00:09:22.320 Get LBA Status Capability: Not Supported 00:09:22.320 Command & Feature Lockdown Capability: Not Supported 00:09:22.320 Abort Command Limit: 4 00:09:22.320 Async Event Request Limit: 4 00:09:22.320 Number of Firmware Slots: N/A 00:09:22.320 Firmware Slot 1 Read-Only: N/A 00:09:22.320 Firmware Activation Without Reset: N/A 00:09:22.320 Multiple Update Detection Support: N/A 00:09:22.320 Firmware Update Granularity: No Information Provided 00:09:22.320 Per-Namespace SMART Log: Yes 00:09:22.320 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.320 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:22.320 Command Effects Log Page: Supported 00:09:22.320 Get Log Page Extended Data: Supported 00:09:22.320 Telemetry Log Pages: Not Supported 00:09:22.320 Persistent Event Log Pages: Not Supported 00:09:22.320 Supported Log Pages Log Page: May Support 00:09:22.320 Commands Supported & Effects Log Page: Not Supported 00:09:22.320 Feature Identifiers & Effects Log Page:May Support 00:09:22.320 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.320 Data Area 4 for Telemetry Log: Not Supported 00:09:22.320 Error Log Page Entries Supported: 1 00:09:22.320 Keep Alive: Not Supported 00:09:22.320 00:09:22.320 NVM Command Set Attributes 00:09:22.320 ========================== 00:09:22.320 Submission Queue Entry Size 00:09:22.320 Max: 64 00:09:22.320 Min: 64 00:09:22.320 Completion Queue Entry Size 00:09:22.320 Max: 16 00:09:22.320 Min: 16 00:09:22.320 Number of Namespaces: 256 00:09:22.320 Compare Command: Supported 00:09:22.320 Write Uncorrectable Command: Not Supported 00:09:22.320 Dataset Management Command: Supported 00:09:22.320 Write Zeroes Command: Supported 00:09:22.320 Set Features Save Field: Supported 00:09:22.320 Reservations: Not Supported 00:09:22.320 Timestamp: Supported 00:09:22.320 Copy: Supported 00:09:22.320 Volatile Write Cache: Present 00:09:22.320 Atomic Write Unit (Normal): 1 00:09:22.320 Atomic Write Unit (PFail): 1 00:09:22.320 Atomic Compare & Write Unit: 1 00:09:22.320 Fused Compare & Write: Not Supported 00:09:22.320 Scatter-Gather List 00:09:22.320 SGL Command Set: Supported 00:09:22.320 SGL Keyed: Not Supported 00:09:22.320 SGL Bit Bucket Descriptor: Not Supported 00:09:22.320 SGL Metadata Pointer: Not Supported 00:09:22.320 Oversized SGL: Not Supported 00:09:22.320 SGL Metadata Address: Not Supported 00:09:22.320 SGL Offset: Not Supported 00:09:22.320 Transport SGL Data Block: Not Supported 00:09:22.320 Replay Protected Memory Block: Not Supported 00:09:22.320 00:09:22.320 Firmware Slot Information 00:09:22.320 ========================= 00:09:22.320 Active slot: 1 00:09:22.320 Slot 1 Firmware Revision: 1.0 00:09:22.320 00:09:22.320 00:09:22.320 Commands Supported and Effects 00:09:22.320 ============================== 
00:09:22.320 Admin Commands 00:09:22.320 -------------- 00:09:22.320 Delete I/O Submission Queue (00h): Supported 00:09:22.320 Create I/O Submission Queue (01h): Supported 00:09:22.320 Get Log Page (02h): Supported 00:09:22.320 Delete I/O Completion Queue (04h): Supported 00:09:22.320 Create I/O Completion Queue (05h): Supported 00:09:22.320 Identify (06h): Supported 00:09:22.320 Abort (08h): Supported 00:09:22.320 Set Features (09h): Supported 00:09:22.320 Get Features (0Ah): Supported 00:09:22.320 Asynchronous Event Request (0Ch): Supported 00:09:22.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.320 Directive Send (19h): Supported 00:09:22.320 Directive Receive (1Ah): Supported 00:09:22.320 Virtualization Management (1Ch): Supported 00:09:22.321 Doorbell Buffer Config (7Ch): Supported 00:09:22.321 Format NVM (80h): Supported LBA-Change 00:09:22.321 I/O Commands 00:09:22.321 ------------ 00:09:22.321 Flush (00h): Supported LBA-Change 00:09:22.321 Write (01h): Supported LBA-Change 00:09:22.321 Read (02h): Supported 00:09:22.321 Compare (05h): Supported 00:09:22.321 Write Zeroes (08h): Supported LBA-Change 00:09:22.321 Dataset Management (09h): Supported LBA-Change 00:09:22.321 Unknown (0Ch): Supported 00:09:22.321 Unknown (12h): Supported 00:09:22.321 Copy (19h): Supported LBA-Change 00:09:22.321 Unknown (1Dh): Supported LBA-Change 00:09:22.321 00:09:22.321 Error Log 00:09:22.321 ========= 00:09:22.321 00:09:22.321 Arbitration 00:09:22.321 =========== 00:09:22.321 Arbitration Burst: no limit 00:09:22.321 00:09:22.321 Power Management 00:09:22.321 ================ 00:09:22.321 Number of Power States: 1 00:09:22.321 Current Power State: Power State #0 00:09:22.321 Power State #0: 00:09:22.321 Max Power: 25.00 W 00:09:22.321 Non-Operational State: Operational 00:09:22.321 Entry Latency: 16 microseconds 00:09:22.321 Exit Latency: 4 microseconds 00:09:22.321 Relative Read Throughput: 0 00:09:22.321 Relative Read Latency: 0 00:09:22.321 Relative Write Throughput: 0 00:09:22.321 Relative Write Latency: 0 00:09:22.321 Idle Power: Not Reported 00:09:22.321 Active Power: Not Reported 00:09:22.321 Non-Operational Permissive Mode: Not Supported 00:09:22.321 00:09:22.321 Health Information 00:09:22.321 ================== 00:09:22.321 Critical Warnings: 00:09:22.321 Available Spare Space: OK 00:09:22.321 Temperature: OK 00:09:22.321 Device Reliability: OK 00:09:22.321 Read Only: No 00:09:22.321 Volatile Memory Backup: OK 00:09:22.321 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.321 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.321 Available Spare: 0% 00:09:22.321 Available Spare Threshold: 0% 00:09:22.321 Life Percentage Used: 0% 00:09:22.321 Data Units Read: 1227 00:09:22.321 Data Units Written: 569 00:09:22.321 Host Read Commands: 61387 00:09:22.321 Host Write Commands: 30254 00:09:22.321 Controller Busy Time: 0 minutes 00:09:22.321 Power Cycles: 0 00:09:22.321 Power On Hours: 0 hours 00:09:22.321 Unsafe Shutdowns: 0 00:09:22.321 Unrecoverable Media Errors: 0 00:09:22.321 Lifetime Error Log Entries: 0 00:09:22.321 Warning Temperature Time: 0 minutes 00:09:22.321 Critical Temperature Time: 0 minutes 00:09:22.321 00:09:22.321 Number of Queues 00:09:22.321 ================ 00:09:22.321 Number of I/O Submission Queues: 64 00:09:22.321 Number of I/O Completion Queues: 64 00:09:22.321 00:09:22.321 ZNS Specific Controller Data 00:09:22.321 ============================ 00:09:22.321 Zone Append Size Limit: 0 00:09:22.321 00:09:22.321 00:09:22.321 Active Namespaces 
00:09:22.321 ================= 00:09:22.321 Namespace ID:1 00:09:22.321 Error Recovery Timeout: Unlimited 00:09:22.321 Command Set Identifier: NVM (00h) 00:09:22.321 Deallocate: Supported 00:09:22.321 Deallocated/Unwritten Error: Supported 00:09:22.321 Deallocated Read Value: All 0x00 00:09:22.321 Deallocate in Write Zeroes: Not Supported 00:09:22.321 Deallocated Guard Field: 0xFFFF 00:09:22.321 Flush: Supported 00:09:22.321 Reservation: Not Supported 00:09:22.321 Namespace Sharing Capabilities: Private 00:09:22.321 Size (in LBAs): 1310720 (5GiB) 00:09:22.321 Capacity (in LBAs): 1310720 (5GiB) 00:09:22.321 Utilization (in LBAs): 1310720 (5GiB) 00:09:22.321 Thin Provisioning: Not Supported 00:09:22.321 Per-NS Atomic Units: No 00:09:22.321 Maximum Single Source Range Length: 128 00:09:22.321 Maximum Copy Length: 128 00:09:22.321 Maximum Source Range Count: 128 00:09:22.321 NGUID/EUI64 Never Reused: No 00:09:22.321 Namespace Write Protected: No 00:09:22.321 Number of LBA Formats: 8 00:09:22.321 Current LBA Format: LBA Format #04 00:09:22.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.321 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.321 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.321 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.321 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.321 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.321 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.321 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.321 00:09:22.321 14:55:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.321 14:55:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:22.581 ===================================================== 00:09:22.581 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:22.581 ===================================================== 00:09:22.581 Controller Capabilities/Features 00:09:22.581 ================================ 00:09:22.581 Vendor ID: 1b36 00:09:22.581 Subsystem Vendor ID: 1af4 00:09:22.581 Serial Number: 12342 00:09:22.581 Model Number: QEMU NVMe Ctrl 00:09:22.581 Firmware Version: 8.0.0 00:09:22.581 Recommended Arb Burst: 6 00:09:22.581 IEEE OUI Identifier: 00 54 52 00:09:22.581 Multi-path I/O 00:09:22.581 May have multiple subsystem ports: No 00:09:22.581 May have multiple controllers: No 00:09:22.581 Associated with SR-IOV VF: No 00:09:22.581 Max Data Transfer Size: 524288 00:09:22.581 Max Number of Namespaces: 256 00:09:22.581 Max Number of I/O Queues: 64 00:09:22.581 NVMe Specification Version (VS): 1.4 00:09:22.581 NVMe Specification Version (Identify): 1.4 00:09:22.581 Maximum Queue Entries: 2048 00:09:22.581 Contiguous Queues Required: Yes 00:09:22.581 Arbitration Mechanisms Supported 00:09:22.581 Weighted Round Robin: Not Supported 00:09:22.581 Vendor Specific: Not Supported 00:09:22.581 Reset Timeout: 7500 ms 00:09:22.581 Doorbell Stride: 4 bytes 00:09:22.581 NVM Subsystem Reset: Not Supported 00:09:22.581 Command Sets Supported 00:09:22.581 NVM Command Set: Supported 00:09:22.581 Boot Partition: Not Supported 00:09:22.581 Memory Page Size Minimum: 4096 bytes 00:09:22.581 Memory Page Size Maximum: 65536 bytes 00:09:22.581 Persistent Memory Region: Not Supported 00:09:22.581 Optional Asynchronous Events Supported 00:09:22.581 Namespace Attribute Notices: Supported 00:09:22.581 Firmware Activation Notices: Not Supported 00:09:22.581 ANA Change Notices: Not Supported 
00:09:22.581 PLE Aggregate Log Change Notices: Not Supported 00:09:22.581 LBA Status Info Alert Notices: Not Supported 00:09:22.581 EGE Aggregate Log Change Notices: Not Supported 00:09:22.581 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.581 Zone Descriptor Change Notices: Not Supported 00:09:22.581 Discovery Log Change Notices: Not Supported 00:09:22.581 Controller Attributes 00:09:22.581 128-bit Host Identifier: Not Supported 00:09:22.581 Non-Operational Permissive Mode: Not Supported 00:09:22.581 NVM Sets: Not Supported 00:09:22.581 Read Recovery Levels: Not Supported 00:09:22.581 Endurance Groups: Not Supported 00:09:22.581 Predictable Latency Mode: Not Supported 00:09:22.581 Traffic Based Keep ALive: Not Supported 00:09:22.581 Namespace Granularity: Not Supported 00:09:22.581 SQ Associations: Not Supported 00:09:22.581 UUID List: Not Supported 00:09:22.581 Multi-Domain Subsystem: Not Supported 00:09:22.581 Fixed Capacity Management: Not Supported 00:09:22.581 Variable Capacity Management: Not Supported 00:09:22.581 Delete Endurance Group: Not Supported 00:09:22.581 Delete NVM Set: Not Supported 00:09:22.581 Extended LBA Formats Supported: Supported 00:09:22.581 Flexible Data Placement Supported: Not Supported 00:09:22.581 00:09:22.581 Controller Memory Buffer Support 00:09:22.581 ================================ 00:09:22.581 Supported: No 00:09:22.581 00:09:22.581 Persistent Memory Region Support 00:09:22.581 ================================ 00:09:22.581 Supported: No 00:09:22.581 00:09:22.581 Admin Command Set Attributes 00:09:22.581 ============================ 00:09:22.581 Security Send/Receive: Not Supported 00:09:22.581 Format NVM: Supported 00:09:22.581 Firmware Activate/Download: Not Supported 00:09:22.581 Namespace Management: Supported 00:09:22.581 Device Self-Test: Not Supported 00:09:22.581 Directives: Supported 00:09:22.581 NVMe-MI: Not Supported 00:09:22.581 Virtualization Management: Not Supported 00:09:22.581 Doorbell Buffer Config: Supported 00:09:22.581 Get LBA Status Capability: Not Supported 00:09:22.581 Command & Feature Lockdown Capability: Not Supported 00:09:22.581 Abort Command Limit: 4 00:09:22.581 Async Event Request Limit: 4 00:09:22.581 Number of Firmware Slots: N/A 00:09:22.581 Firmware Slot 1 Read-Only: N/A 00:09:22.581 Firmware Activation Without Reset: N/A 00:09:22.581 Multiple Update Detection Support: N/A 00:09:22.581 Firmware Update Granularity: No Information Provided 00:09:22.581 Per-Namespace SMART Log: Yes 00:09:22.581 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.581 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:22.581 Command Effects Log Page: Supported 00:09:22.581 Get Log Page Extended Data: Supported 00:09:22.581 Telemetry Log Pages: Not Supported 00:09:22.581 Persistent Event Log Pages: Not Supported 00:09:22.581 Supported Log Pages Log Page: May Support 00:09:22.581 Commands Supported & Effects Log Page: Not Supported 00:09:22.581 Feature Identifiers & Effects Log Page:May Support 00:09:22.581 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.581 Data Area 4 for Telemetry Log: Not Supported 00:09:22.581 Error Log Page Entries Supported: 1 00:09:22.581 Keep Alive: Not Supported 00:09:22.581 00:09:22.581 NVM Command Set Attributes 00:09:22.581 ========================== 00:09:22.581 Submission Queue Entry Size 00:09:22.581 Max: 64 00:09:22.581 Min: 64 00:09:22.581 Completion Queue Entry Size 00:09:22.581 Max: 16 00:09:22.581 Min: 16 00:09:22.581 Number of Namespaces: 256 00:09:22.581 Compare Command: 
Supported 00:09:22.581 Write Uncorrectable Command: Not Supported 00:09:22.581 Dataset Management Command: Supported 00:09:22.581 Write Zeroes Command: Supported 00:09:22.581 Set Features Save Field: Supported 00:09:22.581 Reservations: Not Supported 00:09:22.581 Timestamp: Supported 00:09:22.581 Copy: Supported 00:09:22.581 Volatile Write Cache: Present 00:09:22.581 Atomic Write Unit (Normal): 1 00:09:22.581 Atomic Write Unit (PFail): 1 00:09:22.581 Atomic Compare & Write Unit: 1 00:09:22.581 Fused Compare & Write: Not Supported 00:09:22.581 Scatter-Gather List 00:09:22.581 SGL Command Set: Supported 00:09:22.581 SGL Keyed: Not Supported 00:09:22.581 SGL Bit Bucket Descriptor: Not Supported 00:09:22.581 SGL Metadata Pointer: Not Supported 00:09:22.581 Oversized SGL: Not Supported 00:09:22.581 SGL Metadata Address: Not Supported 00:09:22.581 SGL Offset: Not Supported 00:09:22.581 Transport SGL Data Block: Not Supported 00:09:22.581 Replay Protected Memory Block: Not Supported 00:09:22.581 00:09:22.581 Firmware Slot Information 00:09:22.581 ========================= 00:09:22.581 Active slot: 1 00:09:22.581 Slot 1 Firmware Revision: 1.0 00:09:22.581 00:09:22.581 00:09:22.581 Commands Supported and Effects 00:09:22.581 ============================== 00:09:22.581 Admin Commands 00:09:22.581 -------------- 00:09:22.581 Delete I/O Submission Queue (00h): Supported 00:09:22.581 Create I/O Submission Queue (01h): Supported 00:09:22.581 Get Log Page (02h): Supported 00:09:22.581 Delete I/O Completion Queue (04h): Supported 00:09:22.581 Create I/O Completion Queue (05h): Supported 00:09:22.581 Identify (06h): Supported 00:09:22.581 Abort (08h): Supported 00:09:22.581 Set Features (09h): Supported 00:09:22.581 Get Features (0Ah): Supported 00:09:22.581 Asynchronous Event Request (0Ch): Supported 00:09:22.581 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.581 Directive Send (19h): Supported 00:09:22.581 Directive Receive (1Ah): Supported 00:09:22.581 Virtualization Management (1Ch): Supported 00:09:22.581 Doorbell Buffer Config (7Ch): Supported 00:09:22.581 Format NVM (80h): Supported LBA-Change 00:09:22.581 I/O Commands 00:09:22.581 ------------ 00:09:22.581 Flush (00h): Supported LBA-Change 00:09:22.581 Write (01h): Supported LBA-Change 00:09:22.581 Read (02h): Supported 00:09:22.581 Compare (05h): Supported 00:09:22.581 Write Zeroes (08h): Supported LBA-Change 00:09:22.581 Dataset Management (09h): Supported LBA-Change 00:09:22.581 Unknown (0Ch): Supported 00:09:22.582 Unknown (12h): Supported 00:09:22.582 Copy (19h): Supported LBA-Change 00:09:22.582 Unknown (1Dh): Supported LBA-Change 00:09:22.582 00:09:22.582 Error Log 00:09:22.582 ========= 00:09:22.582 00:09:22.582 Arbitration 00:09:22.582 =========== 00:09:22.582 Arbitration Burst: no limit 00:09:22.582 00:09:22.582 Power Management 00:09:22.582 ================ 00:09:22.582 Number of Power States: 1 00:09:22.582 Current Power State: Power State #0 00:09:22.582 Power State #0: 00:09:22.582 Max Power: 25.00 W 00:09:22.582 Non-Operational State: Operational 00:09:22.582 Entry Latency: 16 microseconds 00:09:22.582 Exit Latency: 4 microseconds 00:09:22.582 Relative Read Throughput: 0 00:09:22.582 Relative Read Latency: 0 00:09:22.582 Relative Write Throughput: 0 00:09:22.582 Relative Write Latency: 0 00:09:22.582 Idle Power: Not Reported 00:09:22.582 Active Power: Not Reported 00:09:22.582 Non-Operational Permissive Mode: Not Supported 00:09:22.582 00:09:22.582 Health Information 00:09:22.582 ================== 00:09:22.582 
Critical Warnings: 00:09:22.582 Available Spare Space: OK 00:09:22.582 Temperature: OK 00:09:22.582 Device Reliability: OK 00:09:22.582 Read Only: No 00:09:22.582 Volatile Memory Backup: OK 00:09:22.582 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.582 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.582 Available Spare: 0% 00:09:22.582 Available Spare Threshold: 0% 00:09:22.582 Life Percentage Used: 0% 00:09:22.582 Data Units Read: 3816 00:09:22.582 Data Units Written: 1763 00:09:22.582 Host Read Commands: 185828 00:09:22.582 Host Write Commands: 91387 00:09:22.582 Controller Busy Time: 0 minutes 00:09:22.582 Power Cycles: 0 00:09:22.582 Power On Hours: 0 hours 00:09:22.582 Unsafe Shutdowns: 0 00:09:22.582 Unrecoverable Media Errors: 0 00:09:22.582 Lifetime Error Log Entries: 0 00:09:22.582 Warning Temperature Time: 0 minutes 00:09:22.582 Critical Temperature Time: 0 minutes 00:09:22.582 00:09:22.582 Number of Queues 00:09:22.582 ================ 00:09:22.582 Number of I/O Submission Queues: 64 00:09:22.582 Number of I/O Completion Queues: 64 00:09:22.582 00:09:22.582 ZNS Specific Controller Data 00:09:22.582 ============================ 00:09:22.582 Zone Append Size Limit: 0 00:09:22.582 00:09:22.582 00:09:22.582 Active Namespaces 00:09:22.582 ================= 00:09:22.582 Namespace ID:1 00:09:22.582 Error Recovery Timeout: Unlimited 00:09:22.582 Command Set Identifier: NVM (00h) 00:09:22.582 Deallocate: Supported 00:09:22.582 Deallocated/Unwritten Error: Supported 00:09:22.582 Deallocated Read Value: All 0x00 00:09:22.582 Deallocate in Write Zeroes: Not Supported 00:09:22.582 Deallocated Guard Field: 0xFFFF 00:09:22.582 Flush: Supported 00:09:22.582 Reservation: Not Supported 00:09:22.582 Namespace Sharing Capabilities: Private 00:09:22.582 Size (in LBAs): 1048576 (4GiB) 00:09:22.582 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.582 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.582 Thin Provisioning: Not Supported 00:09:22.582 Per-NS Atomic Units: No 00:09:22.582 Maximum Single Source Range Length: 128 00:09:22.582 Maximum Copy Length: 128 00:09:22.582 Maximum Source Range Count: 128 00:09:22.582 NGUID/EUI64 Never Reused: No 00:09:22.582 Namespace Write Protected: No 00:09:22.582 Number of LBA Formats: 8 00:09:22.582 Current LBA Format: LBA Format #04 00:09:22.582 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.582 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.582 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.582 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.582 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.582 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.582 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.582 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.582 00:09:22.582 Namespace ID:2 00:09:22.582 Error Recovery Timeout: Unlimited 00:09:22.582 Command Set Identifier: NVM (00h) 00:09:22.582 Deallocate: Supported 00:09:22.582 Deallocated/Unwritten Error: Supported 00:09:22.582 Deallocated Read Value: All 0x00 00:09:22.582 Deallocate in Write Zeroes: Not Supported 00:09:22.582 Deallocated Guard Field: 0xFFFF 00:09:22.582 Flush: Supported 00:09:22.582 Reservation: Not Supported 00:09:22.582 Namespace Sharing Capabilities: Private 00:09:22.582 Size (in LBAs): 1048576 (4GiB) 00:09:22.582 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.582 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.582 Thin Provisioning: Not Supported 00:09:22.582 Per-NS Atomic Units: No 00:09:22.582 Maximum Single 
Source Range Length: 128 00:09:22.582 Maximum Copy Length: 128 00:09:22.582 Maximum Source Range Count: 128 00:09:22.582 NGUID/EUI64 Never Reused: No 00:09:22.582 Namespace Write Protected: No 00:09:22.582 Number of LBA Formats: 8 00:09:22.582 Current LBA Format: LBA Format #04 00:09:22.582 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.582 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.582 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.582 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.582 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.582 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.582 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.582 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.582 00:09:22.582 Namespace ID:3 00:09:22.582 Error Recovery Timeout: Unlimited 00:09:22.582 Command Set Identifier: NVM (00h) 00:09:22.582 Deallocate: Supported 00:09:22.582 Deallocated/Unwritten Error: Supported 00:09:22.582 Deallocated Read Value: All 0x00 00:09:22.582 Deallocate in Write Zeroes: Not Supported 00:09:22.582 Deallocated Guard Field: 0xFFFF 00:09:22.582 Flush: Supported 00:09:22.582 Reservation: Not Supported 00:09:22.582 Namespace Sharing Capabilities: Private 00:09:22.582 Size (in LBAs): 1048576 (4GiB) 00:09:22.582 Capacity (in LBAs): 1048576 (4GiB) 00:09:22.582 Utilization (in LBAs): 1048576 (4GiB) 00:09:22.582 Thin Provisioning: Not Supported 00:09:22.582 Per-NS Atomic Units: No 00:09:22.582 Maximum Single Source Range Length: 128 00:09:22.582 Maximum Copy Length: 128 00:09:22.582 Maximum Source Range Count: 128 00:09:22.582 NGUID/EUI64 Never Reused: No 00:09:22.582 Namespace Write Protected: No 00:09:22.582 Number of LBA Formats: 8 00:09:22.582 Current LBA Format: LBA Format #04 00:09:22.582 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.582 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.582 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.582 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.582 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.582 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.582 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.582 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.582 00:09:22.582 14:55:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.582 14:55:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:22.582 ===================================================== 00:09:22.582 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:22.582 ===================================================== 00:09:22.582 Controller Capabilities/Features 00:09:22.582 ================================ 00:09:22.582 Vendor ID: 1b36 00:09:22.582 Subsystem Vendor ID: 1af4 00:09:22.582 Serial Number: 12343 00:09:22.582 Model Number: QEMU NVMe Ctrl 00:09:22.582 Firmware Version: 8.0.0 00:09:22.582 Recommended Arb Burst: 6 00:09:22.582 IEEE OUI Identifier: 00 54 52 00:09:22.582 Multi-path I/O 00:09:22.582 May have multiple subsystem ports: No 00:09:22.582 May have multiple controllers: Yes 00:09:22.582 Associated with SR-IOV VF: No 00:09:22.582 Max Data Transfer Size: 524288 00:09:22.582 Max Number of Namespaces: 256 00:09:22.582 Max Number of I/O Queues: 64 00:09:22.582 NVMe Specification Version (VS): 1.4 00:09:22.582 NVMe Specification Version (Identify): 1.4 00:09:22.582 Maximum Queue Entries: 2048 00:09:22.582 Contiguous Queues 
Required: Yes 00:09:22.582 Arbitration Mechanisms Supported 00:09:22.582 Weighted Round Robin: Not Supported 00:09:22.582 Vendor Specific: Not Supported 00:09:22.582 Reset Timeout: 7500 ms 00:09:22.582 Doorbell Stride: 4 bytes 00:09:22.583 NVM Subsystem Reset: Not Supported 00:09:22.583 Command Sets Supported 00:09:22.583 NVM Command Set: Supported 00:09:22.583 Boot Partition: Not Supported 00:09:22.583 Memory Page Size Minimum: 4096 bytes 00:09:22.583 Memory Page Size Maximum: 65536 bytes 00:09:22.583 Persistent Memory Region: Not Supported 00:09:22.583 Optional Asynchronous Events Supported 00:09:22.583 Namespace Attribute Notices: Supported 00:09:22.583 Firmware Activation Notices: Not Supported 00:09:22.583 ANA Change Notices: Not Supported 00:09:22.583 PLE Aggregate Log Change Notices: Not Supported 00:09:22.583 LBA Status Info Alert Notices: Not Supported 00:09:22.583 EGE Aggregate Log Change Notices: Not Supported 00:09:22.583 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.583 Zone Descriptor Change Notices: Not Supported 00:09:22.583 Discovery Log Change Notices: Not Supported 00:09:22.583 Controller Attributes 00:09:22.583 128-bit Host Identifier: Not Supported 00:09:22.583 Non-Operational Permissive Mode: Not Supported 00:09:22.583 NVM Sets: Not Supported 00:09:22.583 Read Recovery Levels: Not Supported 00:09:22.583 Endurance Groups: Supported 00:09:22.583 Predictable Latency Mode: Not Supported 00:09:22.583 Traffic Based Keep Alive: Not Supported 00:09:22.583 Namespace Granularity: Not Supported 00:09:22.583 SQ Associations: Not Supported 00:09:22.583 UUID List: Not Supported 00:09:22.583 Multi-Domain Subsystem: Not Supported 00:09:22.583 Fixed Capacity Management: Not Supported 00:09:22.583 Variable Capacity Management: Not Supported 00:09:22.583 Delete Endurance Group: Not Supported 00:09:22.583 Delete NVM Set: Not Supported 00:09:22.583 Extended LBA Formats Supported: Supported 00:09:22.583 Flexible Data Placement Supported: Supported 00:09:22.583 00:09:22.583 Controller Memory Buffer Support 00:09:22.583 ================================ 00:09:22.583 Supported: No 00:09:22.583 00:09:22.583 Persistent Memory Region Support 00:09:22.583 ================================ 00:09:22.583 Supported: No 00:09:22.583 00:09:22.583 Admin Command Set Attributes 00:09:22.583 ============================ 00:09:22.583 Security Send/Receive: Not Supported 00:09:22.583 Format NVM: Supported 00:09:22.583 Firmware Activate/Download: Not Supported 00:09:22.583 Namespace Management: Supported 00:09:22.583 Device Self-Test: Not Supported 00:09:22.583 Directives: Supported 00:09:22.583 NVMe-MI: Not Supported 00:09:22.583 Virtualization Management: Not Supported 00:09:22.583 Doorbell Buffer Config: Supported 00:09:22.583 Get LBA Status Capability: Not Supported 00:09:22.583 Command & Feature Lockdown Capability: Not Supported 00:09:22.583 Abort Command Limit: 4 00:09:22.583 Async Event Request Limit: 4 00:09:22.583 Number of Firmware Slots: N/A 00:09:22.583 Firmware Slot 1 Read-Only: N/A 00:09:22.583 Firmware Activation Without Reset: N/A 00:09:22.583 Multiple Update Detection Support: N/A 00:09:22.583 Firmware Update Granularity: No Information Provided 00:09:22.583 Per-Namespace SMART Log: Yes 00:09:22.583 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.583 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:22.583 Command Effects Log Page: Supported 00:09:22.583 Get Log Page Extended Data: Supported 00:09:22.583 Telemetry Log Pages: Not Supported 00:09:22.583 Persistent 
Event Log Pages: Not Supported 00:09:22.583 Supported Log Pages Log Page: May Support 00:09:22.583 Commands Supported & Effects Log Page: Not Supported 00:09:22.583 Feature Identifiers & Effects Log Page: May Support 00:09:22.583 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.583 Data Area 4 for Telemetry Log: Not Supported 00:09:22.583 Error Log Page Entries Supported: 1 00:09:22.583 Keep Alive: Not Supported 00:09:22.583 00:09:22.583 NVM Command Set Attributes 00:09:22.583 ========================== 00:09:22.583 Submission Queue Entry Size 00:09:22.583 Max: 64 00:09:22.583 Min: 64 00:09:22.583 Completion Queue Entry Size 00:09:22.583 Max: 16 00:09:22.583 Min: 16 00:09:22.583 Number of Namespaces: 256 00:09:22.583 Compare Command: Supported 00:09:22.583 Write Uncorrectable Command: Not Supported 00:09:22.583 Dataset Management Command: Supported 00:09:22.583 Write Zeroes Command: Supported 00:09:22.583 Set Features Save Field: Supported 00:09:22.583 Reservations: Not Supported 00:09:22.583 Timestamp: Supported 00:09:22.583 Copy: Supported 00:09:22.583 Volatile Write Cache: Present 00:09:22.583 Atomic Write Unit (Normal): 1 00:09:22.583 Atomic Write Unit (PFail): 1 00:09:22.583 Atomic Compare & Write Unit: 1 00:09:22.583 Fused Compare & Write: Not Supported 00:09:22.583 Scatter-Gather List 00:09:22.583 SGL Command Set: Supported 00:09:22.583 SGL Keyed: Not Supported 00:09:22.583 SGL Bit Bucket Descriptor: Not Supported 00:09:22.583 SGL Metadata Pointer: Not Supported 00:09:22.583 Oversized SGL: Not Supported 00:09:22.583 SGL Metadata Address: Not Supported 00:09:22.583 SGL Offset: Not Supported 00:09:22.583 Transport SGL Data Block: Not Supported 00:09:22.583 Replay Protected Memory Block: Not Supported 00:09:22.583 00:09:22.583 Firmware Slot Information 00:09:22.583 ========================= 00:09:22.583 Active slot: 1 00:09:22.583 Slot 1 Firmware Revision: 1.0 00:09:22.583 00:09:22.583 00:09:22.583 Commands Supported and Effects 00:09:22.583 ============================== 00:09:22.583 Admin Commands 00:09:22.583 -------------- 00:09:22.583 Delete I/O Submission Queue (00h): Supported 00:09:22.583 Create I/O Submission Queue (01h): Supported 00:09:22.583 Get Log Page (02h): Supported 00:09:22.583 Delete I/O Completion Queue (04h): Supported 00:09:22.583 Create I/O Completion Queue (05h): Supported 00:09:22.583 Identify (06h): Supported 00:09:22.583 Abort (08h): Supported 00:09:22.583 Set Features (09h): Supported 00:09:22.583 Get Features (0Ah): Supported 00:09:22.583 Asynchronous Event Request (0Ch): Supported 00:09:22.583 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.583 Directive Send (19h): Supported 00:09:22.583 Directive Receive (1Ah): Supported 00:09:22.583 Virtualization Management (1Ch): Supported 00:09:22.583 Doorbell Buffer Config (7Ch): Supported 00:09:22.583 Format NVM (80h): Supported LBA-Change 00:09:22.583 I/O Commands 00:09:22.583 ------------ 00:09:22.583 Flush (00h): Supported LBA-Change 00:09:22.583 Write (01h): Supported LBA-Change 00:09:22.583 Read (02h): Supported 00:09:22.583 Compare (05h): Supported 00:09:22.583 Write Zeroes (08h): Supported LBA-Change 00:09:22.583 Dataset Management (09h): Supported LBA-Change 00:09:22.583 Unknown (0Ch): Supported 00:09:22.583 Unknown (12h): Supported 00:09:22.583 Copy (19h): Supported LBA-Change 00:09:22.583 Unknown (1Dh): Supported LBA-Change 00:09:22.583 00:09:22.583 Error Log 00:09:22.583 ========= 00:09:22.583 00:09:22.583 Arbitration 00:09:22.583 =========== 00:09:22.583 Arbitration Burst: no 
limit 00:09:22.583 00:09:22.583 Power Management 00:09:22.583 ================ 00:09:22.583 Number of Power States: 1 00:09:22.583 Current Power State: Power State #0 00:09:22.583 Power State #0: 00:09:22.583 Max Power: 25.00 W 00:09:22.583 Non-Operational State: Operational 00:09:22.583 Entry Latency: 16 microseconds 00:09:22.583 Exit Latency: 4 microseconds 00:09:22.583 Relative Read Throughput: 0 00:09:22.583 Relative Read Latency: 0 00:09:22.583 Relative Write Throughput: 0 00:09:22.583 Relative Write Latency: 0 00:09:22.583 Idle Power: Not Reported 00:09:22.583 Active Power: Not Reported 00:09:22.583 Non-Operational Permissive Mode: Not Supported 00:09:22.583 00:09:22.583 Health Information 00:09:22.583 ================== 00:09:22.583 Critical Warnings: 00:09:22.583 Available Spare Space: OK 00:09:22.583 Temperature: OK 00:09:22.583 Device Reliability: OK 00:09:22.583 Read Only: No 00:09:22.583 Volatile Memory Backup: OK 00:09:22.583 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.583 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.583 Available Spare: 0% 00:09:22.583 Available Spare Threshold: 0% 00:09:22.583 Life Percentage Used: 0% 00:09:22.583 Data Units Read: 1454 00:09:22.583 Data Units Written: 676 00:09:22.583 Host Read Commands: 63280 00:09:22.583 Host Write Commands: 31148 00:09:22.583 Controller Busy Time: 0 minutes 00:09:22.583 Power Cycles: 0 00:09:22.583 Power On Hours: 0 hours 00:09:22.583 Unsafe Shutdowns: 0 00:09:22.583 Unrecoverable Media Errors: 0 00:09:22.583 Lifetime Error Log Entries: 0 00:09:22.584 Warning Temperature Time: 0 minutes 00:09:22.584 Critical Temperature Time: 0 minutes 00:09:22.584 00:09:22.584 Number of Queues 00:09:22.584 ================ 00:09:22.584 Number of I/O Submission Queues: 64 00:09:22.584 Number of I/O Completion Queues: 64 00:09:22.584 00:09:22.584 ZNS Specific Controller Data 00:09:22.584 ============================ 00:09:22.584 Zone Append Size Limit: 0 00:09:22.584 00:09:22.584 00:09:22.584 Active Namespaces 00:09:22.584 ================= 00:09:22.584 Namespace ID:1 00:09:22.584 Error Recovery Timeout: Unlimited 00:09:22.584 Command Set Identifier: NVM (00h) 00:09:22.584 Deallocate: Supported 00:09:22.584 Deallocated/Unwritten Error: Supported 00:09:22.584 Deallocated Read Value: All 0x00 00:09:22.584 Deallocate in Write Zeroes: Not Supported 00:09:22.584 Deallocated Guard Field: 0xFFFF 00:09:22.584 Flush: Supported 00:09:22.584 Reservation: Not Supported 00:09:22.584 Namespace Sharing Capabilities: Multiple Controllers 00:09:22.584 Size (in LBAs): 262144 (1GiB) 00:09:22.584 Capacity (in LBAs): 262144 (1GiB) 00:09:22.584 Utilization (in LBAs): 262144 (1GiB) 00:09:22.584 Thin Provisioning: Not Supported 00:09:22.584 Per-NS Atomic Units: No 00:09:22.584 Maximum Single Source Range Length: 128 00:09:22.584 Maximum Copy Length: 128 00:09:22.584 Maximum Source Range Count: 128 00:09:22.584 NGUID/EUI64 Never Reused: No 00:09:22.584 Namespace Write Protected: No 00:09:22.584 Endurance group ID: 1 00:09:22.584 Number of LBA Formats: 8 00:09:22.584 Current LBA Format: LBA Format #04 00:09:22.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:22.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.584 00:09:22.584 
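For reference, the identify dumps in this run come from the harness loop visible in the xtrace lines (the for bdf in "${bdfs[@]}" at nvme/nvme.sh@15), which runs spdk_nvme_identify once per controller. A minimal standalone reproduction is sketched below; it assumes the same built SPDK tree as the log and that the controllers are already bound to a userspace driver (for example via SPDK's scripts/setup.sh). The BDF list is written out by hand here, whereas the harness derives it from the attached devices:

    bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the transport type and PCI address of one controller;
        # -i 0 is carried over unchanged from the harness invocation seen above
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done

The FDP pages that follow are printed only for this controller (serial 12343 at 0000:00:09.0), the one that reports "Flexible Data Placement Supported: Supported" and "Endurance Groups: Supported".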
Get Feature FDP: 00:09:22.584 ================ 00:09:22.584 Enabled: Yes 00:09:22.584 FDP configuration index: 0 00:09:22.584 00:09:22.584 FDP configurations log page 00:09:22.584 =========================== 00:09:22.584 Number of FDP configurations: 1 00:09:22.584 Version: 0 00:09:22.584 Size: 112 00:09:22.584 FDP Configuration Descriptor: 0 00:09:22.584 Descriptor Size: 96 00:09:22.584 Reclaim Group Identifier format: 2 00:09:22.584 FDP Volatile Write Cache: Not Present 00:09:22.584 FDP Configuration: Valid 00:09:22.584 Vendor Specific Size: 0 00:09:22.584 Number of Reclaim Groups: 2 00:09:22.584 Number of Reclaim Unit Handles: 8 00:09:22.584 Max Placement Identifiers: 128 00:09:22.584 Number of Namespaces Supported: 256 00:09:22.584 Reclaim Unit Nominal Size: 6000000 bytes 00:09:22.584 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.584 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #006: RUH Type: Initially Isolated 00:09:22.584 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.584 00:09:22.584 FDP reclaim unit handle usage log page 00:09:22.584 ====================================== 00:09:22.584 Number of Reclaim Unit Handles: 8 00:09:22.584 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.584 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.584 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.584 00:09:22.584 FDP statistics log page 00:09:22.584 ======================= 00:09:22.584 Host bytes with metadata written: 446279680 00:09:22.584 Media bytes with metadata written: 446361600 00:09:22.584 Media bytes erased: 0 00:09:22.584 00:09:22.584 FDP events log page 00:09:22.584 =================== 00:09:22.584 Number of FDP events: 0 00:09:22.584 00:09:22.843 ************************************ 00:09:22.843 END TEST nvme_identify 00:09:22.843 ************************************ 00:09:22.843 00:09:22.843 real 0m1.058s 00:09:22.843 user 0m0.362s 00:09:22.843 sys 0m0.483s 00:09:22.843 14:55:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.843 14:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.843 14:55:46 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:22.843 14:55:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.843 14:55:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.843 14:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.843 ************************************ 00:09:22.843 START TEST nvme_perf 00:09:22.843 ************************************ 00:09:22.843 14:55:46 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:22.843 14:55:46 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
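The nvme_perf stage just launched drives all six namespaces concurrently with the command in the xtrace line above. A rough gloss of the flags as used is sketched below (spdk_nvme_perf --help is the authoritative reference; the comments cover only what the output itself confirms):

    # -q 128: queue depth per namespace
    # -w read: sequential-read workload
    # -o 12288: 12288-byte I/Os, i.e. three LBAs at the active 4096-byte format
    # -t 1: run for one second
    # -LL: latency tracking; the doubled -L is what produces the detailed histograms further below
    # -i 0 and -N: carried over unchanged from the harness invocation
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

The numbers it reports below are mutually consistent: each namespace sustains roughly 10104 IOPS at a 12.67 ms average latency, which is what Little's Law gives for a saturated queue of depth 128 (128 / 0.012669 s is about 10103 IOPS), and the aggregate 60624.84 IOPS x 12288 bytes works out to the 710.45 MiB/s shown in the Total row. In the per-device latency histograms, each row lists a bucket range in microseconds, the cumulative percentage of I/Os completed at or below that bucket, and the per-bucket I/O count in parentheses.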
00:09:24.217 Initializing NVMe Controllers 00:09:24.217 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:24.217 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:24.217 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:24.217 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:24.217 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:24.217 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:24.217 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:24.217 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:24.217 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:24.217 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:24.217 Initialization complete. Launching workers. 00:09:24.217 ======================================================== 00:09:24.217 Latency(us) 00:09:24.217 Device Information : IOPS MiB/s Average min max 00:09:24.217 PCIE (0000:00:09.0) NSID 1 from core 0: 10104.14 118.41 12667.15 8889.15 23798.52 00:09:24.217 PCIE (0000:00:06.0) NSID 1 from core 0: 10104.14 118.41 12679.35 8708.49 25195.24 00:09:24.217 PCIE (0000:00:07.0) NSID 1 from core 0: 10104.14 118.41 12675.41 9033.55 25977.38 00:09:24.217 PCIE (0000:00:08.0) NSID 1 from core 0: 10104.14 118.41 12669.78 9037.89 27763.38 00:09:24.217 PCIE (0000:00:08.0) NSID 2 from core 0: 10104.14 118.41 12664.58 8603.39 27734.93 00:09:24.217 PCIE (0000:00:08.0) NSID 3 from core 0: 10104.14 118.41 12659.51 6607.22 28320.86 00:09:24.217 ======================================================== 00:09:24.217 Total : 60624.84 710.45 12669.30 6607.22 28320.86 00:09:24.217 00:09:24.217 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:24.217 ================================================================================= 00:09:24.217 1.00000% : 9527.926us 00:09:24.217 10.00000% : 10536.172us 00:09:24.217 25.00000% : 11342.769us 00:09:24.217 50.00000% : 12401.428us 00:09:24.217 75.00000% : 13611.323us 00:09:24.217 90.00000% : 15123.692us 00:09:24.217 95.00000% : 16131.938us 00:09:24.217 98.00000% : 17644.308us 00:09:24.217 99.00000% : 21475.643us 00:09:24.217 99.50000% : 22685.538us 00:09:24.217 99.90000% : 23592.960us 00:09:24.217 99.99000% : 23794.609us 00:09:24.217 99.99900% : 23895.434us 00:09:24.217 99.99990% : 23895.434us 00:09:24.217 99.99999% : 23895.434us 00:09:24.217 00:09:24.217 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:24.217 ================================================================================= 00:09:24.217 1.00000% : 9477.514us 00:09:24.217 10.00000% : 10485.760us 00:09:24.217 25.00000% : 11241.945us 00:09:24.217 50.00000% : 12300.603us 00:09:24.217 75.00000% : 13712.148us 00:09:24.217 90.00000% : 15426.166us 00:09:24.217 95.00000% : 16434.412us 00:09:24.217 98.00000% : 17745.132us 00:09:24.217 99.00000% : 22483.889us 00:09:24.217 99.50000% : 23794.609us 00:09:24.217 99.90000% : 24903.680us 00:09:24.217 99.99000% : 25206.154us 00:09:24.217 99.99900% : 25206.154us 00:09:24.217 99.99990% : 25206.154us 00:09:24.217 99.99999% : 25206.154us 00:09:24.217 00:09:24.217 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:24.217 ================================================================================= 00:09:24.217 1.00000% : 9578.338us 00:09:24.217 10.00000% : 10586.585us 00:09:24.217 25.00000% : 11241.945us 00:09:24.217 50.00000% : 12250.191us 00:09:24.218 75.00000% : 13611.323us 00:09:24.218 90.00000% : 15224.517us 00:09:24.218 95.00000% : 16434.412us 00:09:24.218 98.00000% : 17845.957us 00:09:24.218 99.00000% : 23592.960us 00:09:24.218 99.50000% : 24802.855us 00:09:24.218 99.90000% : 25811.102us 00:09:24.218 
99.99000% : 26012.751us 00:09:24.218 99.99900% : 26012.751us 00:09:24.218 99.99990% : 26012.751us 00:09:24.218 99.99999% : 26012.751us 00:09:24.218 00:09:24.218 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:24.218 ================================================================================= 00:09:24.218 1.00000% : 9628.751us 00:09:24.218 10.00000% : 10636.997us 00:09:24.218 25.00000% : 11292.357us 00:09:24.218 50.00000% : 12250.191us 00:09:24.218 75.00000% : 13611.323us 00:09:24.218 90.00000% : 14821.218us 00:09:24.218 95.00000% : 16031.114us 00:09:24.218 98.00000% : 18854.203us 00:09:24.218 99.00000% : 25407.803us 00:09:24.218 99.50000% : 26617.698us 00:09:24.218 99.90000% : 27625.945us 00:09:24.218 99.99000% : 27827.594us 00:09:24.218 99.99900% : 27827.594us 00:09:24.218 99.99990% : 27827.594us 00:09:24.218 99.99999% : 27827.594us 00:09:24.218 00:09:24.218 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:24.218 ================================================================================= 00:09:24.218 1.00000% : 9477.514us 00:09:24.218 10.00000% : 10586.585us 00:09:24.218 25.00000% : 11342.769us 00:09:24.218 50.00000% : 12300.603us 00:09:24.218 75.00000% : 13611.323us 00:09:24.218 90.00000% : 14922.043us 00:09:24.218 95.00000% : 15829.465us 00:09:24.218 98.00000% : 17341.834us 00:09:24.218 99.00000% : 26214.400us 00:09:24.218 99.50000% : 27020.997us 00:09:24.218 99.90000% : 27625.945us 00:09:24.218 99.99000% : 27827.594us 00:09:24.218 99.99900% : 27827.594us 00:09:24.218 99.99990% : 27827.594us 00:09:24.218 99.99999% : 27827.594us 00:09:24.218 00:09:24.218 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:24.218 ================================================================================= 00:09:24.218 1.00000% : 8116.382us 00:09:24.218 10.00000% : 10485.760us 00:09:24.218 25.00000% : 11393.182us 00:09:24.218 50.00000% : 12401.428us 00:09:24.218 75.00000% : 13611.323us 00:09:24.218 90.00000% : 14922.043us 00:09:24.218 95.00000% : 15829.465us 00:09:24.218 98.00000% : 17039.360us 00:09:24.218 99.00000% : 26819.348us 00:09:24.218 99.50000% : 27625.945us 00:09:24.218 99.90000% : 28230.892us 00:09:24.218 99.99000% : 28432.542us 00:09:24.218 99.99900% : 28432.542us 00:09:24.218 99.99990% : 28432.542us 00:09:24.218 99.99999% : 28432.542us 00:09:24.218 00:09:24.218 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:24.218 ============================================================================== 00:09:24.218 Range in us Cumulative IO count 00:09:24.218 8872.566 - 8922.978: 0.0293% ( 3) 00:09:24.218 8922.978 - 8973.391: 0.0586% ( 3) 00:09:24.218 8973.391 - 9023.803: 0.0781% ( 2) 00:09:24.218 9023.803 - 9074.215: 0.1074% ( 3) 00:09:24.218 9074.215 - 9124.628: 0.1367% ( 3) 00:09:24.218 9124.628 - 9175.040: 0.2148% ( 8) 00:09:24.218 9175.040 - 9225.452: 0.3613% ( 15) 00:09:24.218 9225.452 - 9275.865: 0.5273% ( 17) 00:09:24.218 9275.865 - 9326.277: 0.6250% ( 10) 00:09:24.218 9326.277 - 9376.689: 0.7129% ( 9) 00:09:24.218 9376.689 - 9427.102: 0.7812% ( 7) 00:09:24.218 9427.102 - 9477.514: 0.8691% ( 9) 00:09:24.218 9477.514 - 9527.926: 1.0059% ( 14) 00:09:24.218 9527.926 - 9578.338: 1.1621% ( 16) 00:09:24.218 9578.338 - 9628.751: 1.3281% ( 17) 00:09:24.218 9628.751 - 9679.163: 1.5332% ( 21) 00:09:24.218 9679.163 - 9729.575: 1.7871% ( 26) 00:09:24.218 9729.575 - 9779.988: 2.0508% ( 27) 00:09:24.218 9779.988 - 9830.400: 2.3047% ( 26) 00:09:24.218 9830.400 - 9880.812: 2.6172% ( 32) 00:09:24.218 9880.812 - 
9931.225: 2.9883% ( 38) 00:09:24.218 9931.225 - 9981.637: 3.3691% ( 39) 00:09:24.218 9981.637 - 10032.049: 3.9551% ( 60) 00:09:24.218 10032.049 - 10082.462: 4.4531% ( 51) 00:09:24.218 10082.462 - 10132.874: 5.0391% ( 60) 00:09:24.218 10132.874 - 10183.286: 5.6055% ( 58) 00:09:24.218 10183.286 - 10233.698: 6.1914% ( 60) 00:09:24.218 10233.698 - 10284.111: 6.7676% ( 59) 00:09:24.218 10284.111 - 10334.523: 7.4023% ( 65) 00:09:24.218 10334.523 - 10384.935: 8.1055% ( 72) 00:09:24.218 10384.935 - 10435.348: 8.7793% ( 69) 00:09:24.218 10435.348 - 10485.760: 9.4336% ( 67) 00:09:24.218 10485.760 - 10536.172: 10.1953% ( 78) 00:09:24.218 10536.172 - 10586.585: 11.0352% ( 86) 00:09:24.218 10586.585 - 10636.997: 11.8652% ( 85) 00:09:24.218 10636.997 - 10687.409: 12.7148% ( 87) 00:09:24.218 10687.409 - 10737.822: 13.5449% ( 85) 00:09:24.218 10737.822 - 10788.234: 14.3652% ( 84) 00:09:24.218 10788.234 - 10838.646: 15.3320% ( 99) 00:09:24.218 10838.646 - 10889.058: 16.2598% ( 95) 00:09:24.218 10889.058 - 10939.471: 17.1875% ( 95) 00:09:24.218 10939.471 - 10989.883: 18.1152% ( 95) 00:09:24.218 10989.883 - 11040.295: 19.0332% ( 94) 00:09:24.218 11040.295 - 11090.708: 19.9512% ( 94) 00:09:24.218 11090.708 - 11141.120: 20.9668% ( 104) 00:09:24.218 11141.120 - 11191.532: 22.0312% ( 109) 00:09:24.218 11191.532 - 11241.945: 23.0957% ( 109) 00:09:24.218 11241.945 - 11292.357: 24.1602% ( 109) 00:09:24.218 11292.357 - 11342.769: 25.3516% ( 122) 00:09:24.218 11342.769 - 11393.182: 26.4746% ( 115) 00:09:24.218 11393.182 - 11443.594: 27.6953% ( 125) 00:09:24.218 11443.594 - 11494.006: 28.8770% ( 121) 00:09:24.218 11494.006 - 11544.418: 30.2148% ( 137) 00:09:24.218 11544.418 - 11594.831: 31.4941% ( 131) 00:09:24.218 11594.831 - 11645.243: 32.7148% ( 125) 00:09:24.218 11645.243 - 11695.655: 33.9941% ( 131) 00:09:24.218 11695.655 - 11746.068: 35.3223% ( 136) 00:09:24.218 11746.068 - 11796.480: 36.5332% ( 124) 00:09:24.218 11796.480 - 11846.892: 37.8711% ( 137) 00:09:24.218 11846.892 - 11897.305: 38.9941% ( 115) 00:09:24.218 11897.305 - 11947.717: 40.1270% ( 116) 00:09:24.218 11947.717 - 11998.129: 41.2695% ( 117) 00:09:24.218 11998.129 - 12048.542: 42.4121% ( 117) 00:09:24.218 12048.542 - 12098.954: 43.5742% ( 119) 00:09:24.218 12098.954 - 12149.366: 44.7168% ( 117) 00:09:24.218 12149.366 - 12199.778: 45.8301% ( 114) 00:09:24.218 12199.778 - 12250.191: 47.0020% ( 120) 00:09:24.218 12250.191 - 12300.603: 48.2422% ( 127) 00:09:24.218 12300.603 - 12351.015: 49.4336% ( 122) 00:09:24.218 12351.015 - 12401.428: 50.6348% ( 123) 00:09:24.218 12401.428 - 12451.840: 51.8164% ( 121) 00:09:24.218 12451.840 - 12502.252: 53.0762% ( 129) 00:09:24.218 12502.252 - 12552.665: 54.3457% ( 130) 00:09:24.218 12552.665 - 12603.077: 55.5664% ( 125) 00:09:24.218 12603.077 - 12653.489: 56.7773% ( 124) 00:09:24.218 12653.489 - 12703.902: 57.9688% ( 122) 00:09:24.218 12703.902 - 12754.314: 59.1504% ( 121) 00:09:24.218 12754.314 - 12804.726: 60.3613% ( 124) 00:09:24.218 12804.726 - 12855.138: 61.5332% ( 120) 00:09:24.218 12855.138 - 12905.551: 62.7246% ( 122) 00:09:24.218 12905.551 - 13006.375: 64.9902% ( 232) 00:09:24.218 13006.375 - 13107.200: 67.0801% ( 214) 00:09:24.218 13107.200 - 13208.025: 68.9844% ( 195) 00:09:24.218 13208.025 - 13308.849: 70.8105% ( 187) 00:09:24.218 13308.849 - 13409.674: 72.5781% ( 181) 00:09:24.218 13409.674 - 13510.498: 74.2871% ( 175) 00:09:24.218 13510.498 - 13611.323: 75.7520% ( 150) 00:09:24.218 13611.323 - 13712.148: 77.1289% ( 141) 00:09:24.218 13712.148 - 13812.972: 78.4375% ( 134) 00:09:24.218 13812.972 - 
13913.797: 79.6582% ( 125) 00:09:24.218 13913.797 - 14014.622: 80.7910% ( 116) 00:09:24.218 14014.622 - 14115.446: 81.9043% ( 114) 00:09:24.218 14115.446 - 14216.271: 82.9004% ( 102) 00:09:24.218 14216.271 - 14317.095: 83.7207% ( 84) 00:09:24.218 14317.095 - 14417.920: 84.5801% ( 88) 00:09:24.218 14417.920 - 14518.745: 85.3613% ( 80) 00:09:24.218 14518.745 - 14619.569: 86.1719% ( 83) 00:09:24.218 14619.569 - 14720.394: 86.9727% ( 82) 00:09:24.218 14720.394 - 14821.218: 87.7637% ( 81) 00:09:24.218 14821.218 - 14922.043: 88.5645% ( 82) 00:09:24.218 14922.043 - 15022.868: 89.2676% ( 72) 00:09:24.218 15022.868 - 15123.692: 90.0098% ( 76) 00:09:24.218 15123.692 - 15224.517: 90.6934% ( 70) 00:09:24.218 15224.517 - 15325.342: 91.3672% ( 69) 00:09:24.218 15325.342 - 15426.166: 92.0312% ( 68) 00:09:24.218 15426.166 - 15526.991: 92.6270% ( 61) 00:09:24.218 15526.991 - 15627.815: 93.1445% ( 53) 00:09:24.218 15627.815 - 15728.640: 93.5938% ( 46) 00:09:24.218 15728.640 - 15829.465: 94.0137% ( 43) 00:09:24.218 15829.465 - 15930.289: 94.4043% ( 40) 00:09:24.218 15930.289 - 16031.114: 94.7363% ( 34) 00:09:24.218 16031.114 - 16131.938: 95.0195% ( 29) 00:09:24.218 16131.938 - 16232.763: 95.2930% ( 28) 00:09:24.218 16232.763 - 16333.588: 95.5762% ( 29) 00:09:24.218 16333.588 - 16434.412: 95.8301% ( 26) 00:09:24.218 16434.412 - 16535.237: 96.1133% ( 29) 00:09:24.218 16535.237 - 16636.062: 96.3965% ( 29) 00:09:24.218 16636.062 - 16736.886: 96.6895% ( 30) 00:09:24.218 16736.886 - 16837.711: 96.9238% ( 24) 00:09:24.218 16837.711 - 16938.535: 97.1680% ( 25) 00:09:24.218 16938.535 - 17039.360: 97.3828% ( 22) 00:09:24.218 17039.360 - 17140.185: 97.5098% ( 13) 00:09:24.218 17140.185 - 17241.009: 97.6367% ( 13) 00:09:24.218 17241.009 - 17341.834: 97.7441% ( 11) 00:09:24.218 17341.834 - 17442.658: 97.8711% ( 13) 00:09:24.218 17442.658 - 17543.483: 97.9785% ( 11) 00:09:24.218 17543.483 - 17644.308: 98.0859% ( 11) 00:09:24.218 17644.308 - 17745.132: 98.2031% ( 12) 00:09:24.218 17745.132 - 17845.957: 98.3105% ( 11) 00:09:24.218 17845.957 - 17946.782: 98.4277% ( 12) 00:09:24.218 17946.782 - 18047.606: 98.5449% ( 12) 00:09:24.218 18047.606 - 18148.431: 98.6426% ( 10) 00:09:24.218 18148.431 - 18249.255: 98.6719% ( 3) 00:09:24.218 18249.255 - 18350.080: 98.7012% ( 3) 00:09:24.218 18350.080 - 18450.905: 98.7305% ( 3) 00:09:24.218 18450.905 - 18551.729: 98.7500% ( 2) 00:09:24.218 20769.871 - 20870.695: 98.7695% ( 2) 00:09:24.218 20870.695 - 20971.520: 98.8086% ( 4) 00:09:24.218 20971.520 - 21072.345: 98.8477% ( 4) 00:09:24.218 21072.345 - 21173.169: 98.9062% ( 6) 00:09:24.218 21173.169 - 21273.994: 98.9258% ( 2) 00:09:24.218 21273.994 - 21374.818: 98.9746% ( 5) 00:09:24.218 21374.818 - 21475.643: 99.0137% ( 4) 00:09:24.218 21475.643 - 21576.468: 99.0625% ( 5) 00:09:24.218 21576.468 - 21677.292: 99.1016% ( 4) 00:09:24.218 21677.292 - 21778.117: 99.1406% ( 4) 00:09:24.218 21778.117 - 21878.942: 99.1895% ( 5) 00:09:24.218 21878.942 - 21979.766: 99.2285% ( 4) 00:09:24.218 21979.766 - 22080.591: 99.2773% ( 5) 00:09:24.218 22080.591 - 22181.415: 99.3164% ( 4) 00:09:24.218 22181.415 - 22282.240: 99.3652% ( 5) 00:09:24.218 22282.240 - 22383.065: 99.4141% ( 5) 00:09:24.218 22383.065 - 22483.889: 99.4531% ( 4) 00:09:24.218 22483.889 - 22584.714: 99.4922% ( 4) 00:09:24.218 22584.714 - 22685.538: 99.5410% ( 5) 00:09:24.218 22685.538 - 22786.363: 99.5801% ( 4) 00:09:24.218 22786.363 - 22887.188: 99.6094% ( 3) 00:09:24.218 22887.188 - 22988.012: 99.6582% ( 5) 00:09:24.219 22988.012 - 23088.837: 99.6973% ( 4) 00:09:24.219 23088.837 - 
23189.662: 99.7363% ( 4) 00:09:24.219 23189.662 - 23290.486: 99.7852% ( 5) 00:09:24.219 23290.486 - 23391.311: 99.8340% ( 5) 00:09:24.219 23391.311 - 23492.135: 99.8730% ( 4) 00:09:24.219 23492.135 - 23592.960: 99.9121% ( 4) 00:09:24.219 23592.960 - 23693.785: 99.9609% ( 5) 00:09:24.219 23693.785 - 23794.609: 99.9902% ( 3) 00:09:24.219 23794.609 - 23895.434: 100.0000% ( 1) 00:09:24.219 00:09:24.219 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:24.219 ============================================================================== 00:09:24.219 Range in us Cumulative IO count 00:09:24.219 8670.917 - 8721.329: 0.0098% ( 1) 00:09:24.219 8721.329 - 8771.742: 0.0293% ( 2) 00:09:24.219 8771.742 - 8822.154: 0.0586% ( 3) 00:09:24.219 8822.154 - 8872.566: 0.0879% ( 3) 00:09:24.219 8872.566 - 8922.978: 0.1660% ( 8) 00:09:24.219 8922.978 - 8973.391: 0.1855% ( 2) 00:09:24.219 8973.391 - 9023.803: 0.2930% ( 11) 00:09:24.219 9023.803 - 9074.215: 0.3320% ( 4) 00:09:24.219 9074.215 - 9124.628: 0.4199% ( 9) 00:09:24.219 9124.628 - 9175.040: 0.4688% ( 5) 00:09:24.219 9175.040 - 9225.452: 0.5273% ( 6) 00:09:24.219 9225.452 - 9275.865: 0.5762% ( 5) 00:09:24.219 9275.865 - 9326.277: 0.6055% ( 3) 00:09:24.219 9326.277 - 9376.689: 0.8691% ( 27) 00:09:24.219 9376.689 - 9427.102: 0.9766% ( 11) 00:09:24.219 9427.102 - 9477.514: 1.1035% ( 13) 00:09:24.219 9477.514 - 9527.926: 1.3184% ( 22) 00:09:24.219 9527.926 - 9578.338: 1.5137% ( 20) 00:09:24.219 9578.338 - 9628.751: 1.7383% ( 23) 00:09:24.219 9628.751 - 9679.163: 2.0020% ( 27) 00:09:24.219 9679.163 - 9729.575: 2.2461% ( 25) 00:09:24.219 9729.575 - 9779.988: 2.5684% ( 33) 00:09:24.219 9779.988 - 9830.400: 2.9590% ( 40) 00:09:24.219 9830.400 - 9880.812: 3.2812% ( 33) 00:09:24.219 9880.812 - 9931.225: 3.7500% ( 48) 00:09:24.219 9931.225 - 9981.637: 4.2871% ( 55) 00:09:24.219 9981.637 - 10032.049: 4.7754% ( 50) 00:09:24.219 10032.049 - 10082.462: 5.0977% ( 33) 00:09:24.219 10082.462 - 10132.874: 5.7910% ( 71) 00:09:24.219 10132.874 - 10183.286: 6.3281% ( 55) 00:09:24.219 10183.286 - 10233.698: 6.9336% ( 62) 00:09:24.219 10233.698 - 10284.111: 7.7051% ( 79) 00:09:24.219 10284.111 - 10334.523: 8.5547% ( 87) 00:09:24.219 10334.523 - 10384.935: 9.3359% ( 80) 00:09:24.219 10384.935 - 10435.348: 9.9609% ( 64) 00:09:24.219 10435.348 - 10485.760: 10.9082% ( 97) 00:09:24.219 10485.760 - 10536.172: 11.7188% ( 83) 00:09:24.219 10536.172 - 10586.585: 12.5879% ( 89) 00:09:24.219 10586.585 - 10636.997: 13.3105% ( 74) 00:09:24.219 10636.997 - 10687.409: 14.0137% ( 72) 00:09:24.219 10687.409 - 10737.822: 14.9121% ( 92) 00:09:24.219 10737.822 - 10788.234: 15.7129% ( 82) 00:09:24.219 10788.234 - 10838.646: 16.6406% ( 95) 00:09:24.219 10838.646 - 10889.058: 17.7148% ( 110) 00:09:24.219 10889.058 - 10939.471: 18.6523% ( 96) 00:09:24.219 10939.471 - 10989.883: 19.8242% ( 120) 00:09:24.219 10989.883 - 11040.295: 20.8789% ( 108) 00:09:24.219 11040.295 - 11090.708: 21.9824% ( 113) 00:09:24.219 11090.708 - 11141.120: 23.1836% ( 123) 00:09:24.219 11141.120 - 11191.532: 24.2090% ( 105) 00:09:24.219 11191.532 - 11241.945: 25.3516% ( 117) 00:09:24.219 11241.945 - 11292.357: 26.6406% ( 132) 00:09:24.219 11292.357 - 11342.769: 27.8809% ( 127) 00:09:24.219 11342.769 - 11393.182: 28.9355% ( 108) 00:09:24.219 11393.182 - 11443.594: 30.0977% ( 119) 00:09:24.219 11443.594 - 11494.006: 31.2988% ( 123) 00:09:24.219 11494.006 - 11544.418: 32.5586% ( 129) 00:09:24.219 11544.418 - 11594.831: 33.6621% ( 113) 00:09:24.219 11594.831 - 11645.243: 34.9707% ( 134) 00:09:24.219 11645.243 - 
11695.655: 36.2012% ( 126) 00:09:24.219 11695.655 - 11746.068: 37.4121% ( 124) 00:09:24.219 11746.068 - 11796.480: 38.5840% ( 120) 00:09:24.219 11796.480 - 11846.892: 39.7656% ( 121) 00:09:24.219 11846.892 - 11897.305: 41.2598% ( 153) 00:09:24.219 11897.305 - 11947.717: 42.5195% ( 129) 00:09:24.219 11947.717 - 11998.129: 43.6523% ( 116) 00:09:24.219 11998.129 - 12048.542: 44.8340% ( 121) 00:09:24.219 12048.542 - 12098.954: 46.0938% ( 129) 00:09:24.219 12098.954 - 12149.366: 47.2266% ( 116) 00:09:24.219 12149.366 - 12199.778: 48.3984% ( 120) 00:09:24.219 12199.778 - 12250.191: 49.7461% ( 138) 00:09:24.219 12250.191 - 12300.603: 50.8789% ( 116) 00:09:24.219 12300.603 - 12351.015: 51.9336% ( 108) 00:09:24.219 12351.015 - 12401.428: 53.0371% ( 113) 00:09:24.219 12401.428 - 12451.840: 54.1309% ( 112) 00:09:24.219 12451.840 - 12502.252: 55.0684% ( 96) 00:09:24.219 12502.252 - 12552.665: 56.1426% ( 110) 00:09:24.219 12552.665 - 12603.077: 57.1191% ( 100) 00:09:24.219 12603.077 - 12653.489: 57.9980% ( 90) 00:09:24.219 12653.489 - 12703.902: 58.8477% ( 87) 00:09:24.219 12703.902 - 12754.314: 59.7461% ( 92) 00:09:24.219 12754.314 - 12804.726: 60.9668% ( 125) 00:09:24.219 12804.726 - 12855.138: 61.9824% ( 104) 00:09:24.219 12855.138 - 12905.551: 62.8516% ( 89) 00:09:24.219 12905.551 - 13006.375: 64.4922% ( 168) 00:09:24.219 13006.375 - 13107.200: 66.2402% ( 179) 00:09:24.219 13107.200 - 13208.025: 67.7930% ( 159) 00:09:24.219 13208.025 - 13308.849: 69.3652% ( 161) 00:09:24.219 13308.849 - 13409.674: 71.1133% ( 179) 00:09:24.219 13409.674 - 13510.498: 72.4609% ( 138) 00:09:24.219 13510.498 - 13611.323: 73.9551% ( 153) 00:09:24.219 13611.323 - 13712.148: 75.3516% ( 143) 00:09:24.219 13712.148 - 13812.972: 76.6699% ( 135) 00:09:24.219 13812.972 - 13913.797: 77.8809% ( 124) 00:09:24.219 13913.797 - 14014.622: 79.1406% ( 129) 00:09:24.219 14014.622 - 14115.446: 80.3906% ( 128) 00:09:24.219 14115.446 - 14216.271: 81.4941% ( 113) 00:09:24.219 14216.271 - 14317.095: 82.2070% ( 73) 00:09:24.219 14317.095 - 14417.920: 83.2715% ( 109) 00:09:24.219 14417.920 - 14518.745: 84.1309% ( 88) 00:09:24.219 14518.745 - 14619.569: 85.0195% ( 91) 00:09:24.219 14619.569 - 14720.394: 85.7910% ( 79) 00:09:24.219 14720.394 - 14821.218: 86.4453% ( 67) 00:09:24.219 14821.218 - 14922.043: 87.2363% ( 81) 00:09:24.219 14922.043 - 15022.868: 87.8613% ( 64) 00:09:24.219 15022.868 - 15123.692: 88.5352% ( 69) 00:09:24.219 15123.692 - 15224.517: 89.2383% ( 72) 00:09:24.219 15224.517 - 15325.342: 89.8535% ( 63) 00:09:24.219 15325.342 - 15426.166: 90.5176% ( 68) 00:09:24.219 15426.166 - 15526.991: 91.0059% ( 50) 00:09:24.219 15526.991 - 15627.815: 91.5820% ( 59) 00:09:24.219 15627.815 - 15728.640: 92.1680% ( 60) 00:09:24.219 15728.640 - 15829.465: 92.6855% ( 53) 00:09:24.219 15829.465 - 15930.289: 93.2324% ( 56) 00:09:24.219 15930.289 - 16031.114: 93.5742% ( 35) 00:09:24.219 16031.114 - 16131.938: 93.9746% ( 41) 00:09:24.219 16131.938 - 16232.763: 94.3164% ( 35) 00:09:24.219 16232.763 - 16333.588: 94.6973% ( 39) 00:09:24.219 16333.588 - 16434.412: 95.1367% ( 45) 00:09:24.219 16434.412 - 16535.237: 95.4883% ( 36) 00:09:24.219 16535.237 - 16636.062: 95.8496% ( 37) 00:09:24.219 16636.062 - 16736.886: 96.1719% ( 33) 00:09:24.219 16736.886 - 16837.711: 96.4062% ( 24) 00:09:24.219 16837.711 - 16938.535: 96.7383% ( 34) 00:09:24.219 16938.535 - 17039.360: 96.9629% ( 23) 00:09:24.219 17039.360 - 17140.185: 97.1484% ( 19) 00:09:24.219 17140.185 - 17241.009: 97.3438% ( 20) 00:09:24.219 17241.009 - 17341.834: 97.4707% ( 13) 00:09:24.219 17341.834 - 
17442.658: 97.6758% ( 21) 00:09:24.219 17442.658 - 17543.483: 97.7637% ( 9) 00:09:24.219 17543.483 - 17644.308: 97.9395% ( 18) 00:09:24.219 17644.308 - 17745.132: 98.0859% ( 15) 00:09:24.219 17745.132 - 17845.957: 98.2617% ( 18) 00:09:24.219 17845.957 - 17946.782: 98.3887% ( 13) 00:09:24.219 17946.782 - 18047.606: 98.4473% ( 6) 00:09:24.219 18047.606 - 18148.431: 98.4863% ( 4) 00:09:24.219 18249.255 - 18350.080: 98.5059% ( 2) 00:09:24.219 18350.080 - 18450.905: 98.7402% ( 24) 00:09:24.219 18450.905 - 18551.729: 98.7500% ( 1) 00:09:24.219 21576.468 - 21677.292: 98.7598% ( 1) 00:09:24.219 21677.292 - 21778.117: 98.7891% ( 3) 00:09:24.219 21778.117 - 21878.942: 98.8184% ( 3) 00:09:24.219 21878.942 - 21979.766: 98.8672% ( 5) 00:09:24.219 21979.766 - 22080.591: 98.8770% ( 1) 00:09:24.219 22080.591 - 22181.415: 98.9258% ( 5) 00:09:24.219 22181.415 - 22282.240: 98.9648% ( 4) 00:09:24.219 22282.240 - 22383.065: 98.9844% ( 2) 00:09:24.219 22383.065 - 22483.889: 99.0430% ( 6) 00:09:24.219 22483.889 - 22584.714: 99.0723% ( 3) 00:09:24.219 22584.714 - 22685.538: 99.1113% ( 4) 00:09:24.219 22685.538 - 22786.363: 99.1406% ( 3) 00:09:24.219 22786.363 - 22887.188: 99.1895% ( 5) 00:09:24.219 22887.188 - 22988.012: 99.2188% ( 3) 00:09:24.219 22988.012 - 23088.837: 99.2578% ( 4) 00:09:24.219 23088.837 - 23189.662: 99.2969% ( 4) 00:09:24.219 23189.662 - 23290.486: 99.3359% ( 4) 00:09:24.219 23290.486 - 23391.311: 99.3652% ( 3) 00:09:24.219 23391.311 - 23492.135: 99.3945% ( 3) 00:09:24.219 23492.135 - 23592.960: 99.4336% ( 4) 00:09:24.219 23592.960 - 23693.785: 99.4629% ( 3) 00:09:24.219 23693.785 - 23794.609: 99.5117% ( 5) 00:09:24.219 23794.609 - 23895.434: 99.5312% ( 2) 00:09:24.219 23895.434 - 23996.258: 99.5898% ( 6) 00:09:24.219 23996.258 - 24097.083: 99.6094% ( 2) 00:09:24.219 24097.083 - 24197.908: 99.6387% ( 3) 00:09:24.219 24197.908 - 24298.732: 99.6777% ( 4) 00:09:24.219 24298.732 - 24399.557: 99.7168% ( 4) 00:09:24.219 24399.557 - 24500.382: 99.7656% ( 5) 00:09:24.219 24500.382 - 24601.206: 99.7949% ( 3) 00:09:24.219 24601.206 - 24702.031: 99.8340% ( 4) 00:09:24.219 24702.031 - 24802.855: 99.8730% ( 4) 00:09:24.219 24802.855 - 24903.680: 99.9023% ( 3) 00:09:24.219 24903.680 - 25004.505: 99.9316% ( 3) 00:09:24.219 25004.505 - 25105.329: 99.9609% ( 3) 00:09:24.219 25105.329 - 25206.154: 100.0000% ( 4) 00:09:24.219 00:09:24.219 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:24.219 ============================================================================== 00:09:24.219 Range in us Cumulative IO count 00:09:24.219 9023.803 - 9074.215: 0.0781% ( 8) 00:09:24.219 9074.215 - 9124.628: 0.1758% ( 10) 00:09:24.219 9124.628 - 9175.040: 0.2344% ( 6) 00:09:24.219 9175.040 - 9225.452: 0.2930% ( 6) 00:09:24.219 9225.452 - 9275.865: 0.3613% ( 7) 00:09:24.219 9275.865 - 9326.277: 0.4492% ( 9) 00:09:24.219 9326.277 - 9376.689: 0.5566% ( 11) 00:09:24.219 9376.689 - 9427.102: 0.6250% ( 7) 00:09:24.219 9427.102 - 9477.514: 0.7520% ( 13) 00:09:24.219 9477.514 - 9527.926: 0.8691% ( 12) 00:09:24.219 9527.926 - 9578.338: 1.0254% ( 16) 00:09:24.219 9578.338 - 9628.751: 1.1719% ( 15) 00:09:24.219 9628.751 - 9679.163: 1.3281% ( 16) 00:09:24.219 9679.163 - 9729.575: 1.4844% ( 16) 00:09:24.219 9729.575 - 9779.988: 1.7090% ( 23) 00:09:24.219 9779.988 - 9830.400: 1.9043% ( 20) 00:09:24.219 9830.400 - 9880.812: 2.1582% ( 26) 00:09:24.219 9880.812 - 9931.225: 2.4609% ( 31) 00:09:24.219 9931.225 - 9981.637: 2.7637% ( 31) 00:09:24.219 9981.637 - 10032.049: 3.0957% ( 34) 00:09:24.219 10032.049 - 10082.462: 
3.4961% ( 41) 00:09:24.219 10082.462 - 10132.874: 3.9160% ( 43) 00:09:24.219 10132.874 - 10183.286: 4.4336% ( 53) 00:09:24.219 10183.286 - 10233.698: 5.0195% ( 60) 00:09:24.220 10233.698 - 10284.111: 5.6445% ( 64) 00:09:24.220 10284.111 - 10334.523: 6.5430% ( 92) 00:09:24.220 10334.523 - 10384.935: 7.2461% ( 72) 00:09:24.220 10384.935 - 10435.348: 8.0371% ( 81) 00:09:24.220 10435.348 - 10485.760: 8.8867% ( 87) 00:09:24.220 10485.760 - 10536.172: 9.8828% ( 102) 00:09:24.220 10536.172 - 10586.585: 10.8691% ( 101) 00:09:24.220 10586.585 - 10636.997: 11.8750% ( 103) 00:09:24.220 10636.997 - 10687.409: 12.8418% ( 99) 00:09:24.220 10687.409 - 10737.822: 13.8867% ( 107) 00:09:24.220 10737.822 - 10788.234: 14.8926% ( 103) 00:09:24.220 10788.234 - 10838.646: 15.9863% ( 112) 00:09:24.220 10838.646 - 10889.058: 17.0898% ( 113) 00:09:24.220 10889.058 - 10939.471: 18.1445% ( 108) 00:09:24.220 10939.471 - 10989.883: 19.2480% ( 113) 00:09:24.220 10989.883 - 11040.295: 20.4102% ( 119) 00:09:24.220 11040.295 - 11090.708: 21.4746% ( 109) 00:09:24.220 11090.708 - 11141.120: 22.5977% ( 115) 00:09:24.220 11141.120 - 11191.532: 23.8184% ( 125) 00:09:24.220 11191.532 - 11241.945: 25.0684% ( 128) 00:09:24.220 11241.945 - 11292.357: 26.4160% ( 138) 00:09:24.220 11292.357 - 11342.769: 27.6758% ( 129) 00:09:24.220 11342.769 - 11393.182: 29.0723% ( 143) 00:09:24.220 11393.182 - 11443.594: 30.4102% ( 137) 00:09:24.220 11443.594 - 11494.006: 31.6699% ( 129) 00:09:24.220 11494.006 - 11544.418: 32.9688% ( 133) 00:09:24.220 11544.418 - 11594.831: 34.3262% ( 139) 00:09:24.220 11594.831 - 11645.243: 35.6738% ( 138) 00:09:24.220 11645.243 - 11695.655: 37.0605% ( 142) 00:09:24.220 11695.655 - 11746.068: 38.4082% ( 138) 00:09:24.220 11746.068 - 11796.480: 39.7949% ( 142) 00:09:24.220 11796.480 - 11846.892: 41.0547% ( 129) 00:09:24.220 11846.892 - 11897.305: 42.3438% ( 132) 00:09:24.220 11897.305 - 11947.717: 43.6230% ( 131) 00:09:24.220 11947.717 - 11998.129: 44.8438% ( 125) 00:09:24.220 11998.129 - 12048.542: 46.0645% ( 125) 00:09:24.220 12048.542 - 12098.954: 47.3145% ( 128) 00:09:24.220 12098.954 - 12149.366: 48.4961% ( 121) 00:09:24.220 12149.366 - 12199.778: 49.6680% ( 120) 00:09:24.220 12199.778 - 12250.191: 50.7812% ( 114) 00:09:24.220 12250.191 - 12300.603: 51.8750% ( 112) 00:09:24.220 12300.603 - 12351.015: 52.9590% ( 111) 00:09:24.220 12351.015 - 12401.428: 53.9551% ( 102) 00:09:24.220 12401.428 - 12451.840: 54.9609% ( 103) 00:09:24.220 12451.840 - 12502.252: 55.9277% ( 99) 00:09:24.220 12502.252 - 12552.665: 57.0312% ( 113) 00:09:24.220 12552.665 - 12603.077: 58.0371% ( 103) 00:09:24.220 12603.077 - 12653.489: 59.1016% ( 109) 00:09:24.220 12653.489 - 12703.902: 60.0977% ( 102) 00:09:24.220 12703.902 - 12754.314: 61.1133% ( 104) 00:09:24.220 12754.314 - 12804.726: 62.0312% ( 94) 00:09:24.220 12804.726 - 12855.138: 63.0273% ( 102) 00:09:24.220 12855.138 - 12905.551: 63.9746% ( 97) 00:09:24.220 12905.551 - 13006.375: 65.7617% ( 183) 00:09:24.220 13006.375 - 13107.200: 67.4609% ( 174) 00:09:24.220 13107.200 - 13208.025: 69.0820% ( 166) 00:09:24.220 13208.025 - 13308.849: 70.6348% ( 159) 00:09:24.220 13308.849 - 13409.674: 72.1875% ( 159) 00:09:24.220 13409.674 - 13510.498: 73.8086% ( 166) 00:09:24.220 13510.498 - 13611.323: 75.3320% ( 156) 00:09:24.220 13611.323 - 13712.148: 76.7090% ( 141) 00:09:24.220 13712.148 - 13812.972: 77.9004% ( 122) 00:09:24.220 13812.972 - 13913.797: 79.0625% ( 119) 00:09:24.220 13913.797 - 14014.622: 80.1562% ( 112) 00:09:24.220 14014.622 - 14115.446: 81.2109% ( 108) 00:09:24.220 14115.446 - 
14216.271: 82.1387% ( 95) 00:09:24.220 14216.271 - 14317.095: 83.0566% ( 94) 00:09:24.220 14317.095 - 14417.920: 83.8867% ( 85) 00:09:24.220 14417.920 - 14518.745: 84.8730% ( 101) 00:09:24.220 14518.745 - 14619.569: 85.7129% ( 86) 00:09:24.220 14619.569 - 14720.394: 86.5527% ( 86) 00:09:24.220 14720.394 - 14821.218: 87.3340% ( 80) 00:09:24.220 14821.218 - 14922.043: 88.0566% ( 74) 00:09:24.220 14922.043 - 15022.868: 88.7793% ( 74) 00:09:24.220 15022.868 - 15123.692: 89.5312% ( 77) 00:09:24.220 15123.692 - 15224.517: 90.1758% ( 66) 00:09:24.220 15224.517 - 15325.342: 90.7227% ( 56) 00:09:24.220 15325.342 - 15426.166: 91.1426% ( 43) 00:09:24.220 15426.166 - 15526.991: 91.5234% ( 39) 00:09:24.220 15526.991 - 15627.815: 91.9824% ( 47) 00:09:24.220 15627.815 - 15728.640: 92.4219% ( 45) 00:09:24.220 15728.640 - 15829.465: 92.8516% ( 44) 00:09:24.220 15829.465 - 15930.289: 93.2812% ( 44) 00:09:24.220 15930.289 - 16031.114: 93.7207% ( 45) 00:09:24.220 16031.114 - 16131.938: 94.1406% ( 43) 00:09:24.220 16131.938 - 16232.763: 94.5605% ( 43) 00:09:24.220 16232.763 - 16333.588: 94.9512% ( 40) 00:09:24.220 16333.588 - 16434.412: 95.2930% ( 35) 00:09:24.220 16434.412 - 16535.237: 95.5566% ( 27) 00:09:24.220 16535.237 - 16636.062: 95.8008% ( 25) 00:09:24.220 16636.062 - 16736.886: 96.0742% ( 28) 00:09:24.220 16736.886 - 16837.711: 96.3574% ( 29) 00:09:24.220 16837.711 - 16938.535: 96.6406% ( 29) 00:09:24.220 16938.535 - 17039.360: 96.8848% ( 25) 00:09:24.220 17039.360 - 17140.185: 97.0703% ( 19) 00:09:24.220 17140.185 - 17241.009: 97.2461% ( 18) 00:09:24.220 17241.009 - 17341.834: 97.4121% ( 17) 00:09:24.220 17341.834 - 17442.658: 97.5098% ( 10) 00:09:24.220 17442.658 - 17543.483: 97.6172% ( 11) 00:09:24.220 17543.483 - 17644.308: 97.7344% ( 12) 00:09:24.220 17644.308 - 17745.132: 97.8711% ( 14) 00:09:24.220 17745.132 - 17845.957: 98.0078% ( 14) 00:09:24.220 17845.957 - 17946.782: 98.1152% ( 11) 00:09:24.220 17946.782 - 18047.606: 98.1836% ( 7) 00:09:24.220 18047.606 - 18148.431: 98.2422% ( 6) 00:09:24.220 18148.431 - 18249.255: 98.3105% ( 7) 00:09:24.220 18249.255 - 18350.080: 98.3789% ( 7) 00:09:24.220 18350.080 - 18450.905: 98.4473% ( 7) 00:09:24.220 18450.905 - 18551.729: 98.5156% ( 7) 00:09:24.220 18551.729 - 18652.554: 98.5840% ( 7) 00:09:24.220 18652.554 - 18753.378: 98.6426% ( 6) 00:09:24.220 18753.378 - 18854.203: 98.7012% ( 6) 00:09:24.220 18854.203 - 18955.028: 98.7500% ( 5) 00:09:24.220 22887.188 - 22988.012: 98.7695% ( 2) 00:09:24.220 22988.012 - 23088.837: 98.8086% ( 4) 00:09:24.220 23088.837 - 23189.662: 98.8477% ( 4) 00:09:24.220 23189.662 - 23290.486: 98.8965% ( 5) 00:09:24.220 23290.486 - 23391.311: 98.9355% ( 4) 00:09:24.220 23391.311 - 23492.135: 98.9746% ( 4) 00:09:24.220 23492.135 - 23592.960: 99.0137% ( 4) 00:09:24.220 23592.960 - 23693.785: 99.0527% ( 4) 00:09:24.220 23693.785 - 23794.609: 99.1016% ( 5) 00:09:24.220 23794.609 - 23895.434: 99.1406% ( 4) 00:09:24.220 23895.434 - 23996.258: 99.1797% ( 4) 00:09:24.220 23996.258 - 24097.083: 99.2188% ( 4) 00:09:24.220 24097.083 - 24197.908: 99.2676% ( 5) 00:09:24.220 24197.908 - 24298.732: 99.3066% ( 4) 00:09:24.220 24298.732 - 24399.557: 99.3359% ( 3) 00:09:24.220 24399.557 - 24500.382: 99.3848% ( 5) 00:09:24.220 24500.382 - 24601.206: 99.4238% ( 4) 00:09:24.220 24601.206 - 24702.031: 99.4629% ( 4) 00:09:24.220 24702.031 - 24802.855: 99.5020% ( 4) 00:09:24.220 24802.855 - 24903.680: 99.5410% ( 4) 00:09:24.220 24903.680 - 25004.505: 99.5898% ( 5) 00:09:24.220 25004.505 - 25105.329: 99.6289% ( 4) 00:09:24.220 25105.329 - 25206.154: 
99.6777% ( 5) 00:09:24.220 25206.154 - 25306.978: 99.7168% ( 4) 00:09:24.220 25306.978 - 25407.803: 99.7559% ( 4) 00:09:24.220 25407.803 - 25508.628: 99.7949% ( 4) 00:09:24.220 25508.628 - 25609.452: 99.8438% ( 5) 00:09:24.220 25609.452 - 25710.277: 99.8828% ( 4) 00:09:24.220 25710.277 - 25811.102: 99.9219% ( 4) 00:09:24.220 25811.102 - 26012.751: 100.0000% ( 8) 00:09:24.220 00:09:24.220 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:24.220 ============================================================================== 00:09:24.220 Range in us Cumulative IO count 00:09:24.220 9023.803 - 9074.215: 0.0293% ( 3) 00:09:24.220 9074.215 - 9124.628: 0.0684% ( 4) 00:09:24.220 9124.628 - 9175.040: 0.0879% ( 2) 00:09:24.220 9225.452 - 9275.865: 0.1660% ( 8) 00:09:24.220 9275.865 - 9326.277: 0.2051% ( 4) 00:09:24.220 9326.277 - 9376.689: 0.2832% ( 8) 00:09:24.220 9376.689 - 9427.102: 0.3809% ( 10) 00:09:24.220 9427.102 - 9477.514: 0.5176% ( 14) 00:09:24.220 9477.514 - 9527.926: 0.6934% ( 18) 00:09:24.220 9527.926 - 9578.338: 0.8594% ( 17) 00:09:24.220 9578.338 - 9628.751: 1.0352% ( 18) 00:09:24.220 9628.751 - 9679.163: 1.2500% ( 22) 00:09:24.220 9679.163 - 9729.575: 1.4453% ( 20) 00:09:24.220 9729.575 - 9779.988: 1.6699% ( 23) 00:09:24.220 9779.988 - 9830.400: 1.9043% ( 24) 00:09:24.220 9830.400 - 9880.812: 2.2168% ( 32) 00:09:24.220 9880.812 - 9931.225: 2.5488% ( 34) 00:09:24.220 9931.225 - 9981.637: 2.8711% ( 33) 00:09:24.220 9981.637 - 10032.049: 3.2227% ( 36) 00:09:24.220 10032.049 - 10082.462: 3.5547% ( 34) 00:09:24.220 10082.462 - 10132.874: 3.9844% ( 44) 00:09:24.220 10132.874 - 10183.286: 4.4629% ( 49) 00:09:24.220 10183.286 - 10233.698: 4.9414% ( 49) 00:09:24.220 10233.698 - 10284.111: 5.4492% ( 52) 00:09:24.220 10284.111 - 10334.523: 6.0352% ( 60) 00:09:24.220 10334.523 - 10384.935: 6.6602% ( 64) 00:09:24.220 10384.935 - 10435.348: 7.2852% ( 64) 00:09:24.220 10435.348 - 10485.760: 8.0566% ( 79) 00:09:24.220 10485.760 - 10536.172: 8.8086% ( 77) 00:09:24.220 10536.172 - 10586.585: 9.5410% ( 75) 00:09:24.220 10586.585 - 10636.997: 10.4297% ( 91) 00:09:24.220 10636.997 - 10687.409: 11.3770% ( 97) 00:09:24.220 10687.409 - 10737.822: 12.5977% ( 125) 00:09:24.220 10737.822 - 10788.234: 13.6914% ( 112) 00:09:24.220 10788.234 - 10838.646: 14.7656% ( 110) 00:09:24.220 10838.646 - 10889.058: 15.9082% ( 117) 00:09:24.220 10889.058 - 10939.471: 17.0996% ( 122) 00:09:24.220 10939.471 - 10989.883: 18.3887% ( 132) 00:09:24.220 10989.883 - 11040.295: 19.6777% ( 132) 00:09:24.220 11040.295 - 11090.708: 20.8496% ( 120) 00:09:24.220 11090.708 - 11141.120: 22.1582% ( 134) 00:09:24.220 11141.120 - 11191.532: 23.4863% ( 136) 00:09:24.220 11191.532 - 11241.945: 24.8242% ( 137) 00:09:24.220 11241.945 - 11292.357: 26.1719% ( 138) 00:09:24.220 11292.357 - 11342.769: 27.5098% ( 137) 00:09:24.220 11342.769 - 11393.182: 28.8574% ( 138) 00:09:24.220 11393.182 - 11443.594: 30.1660% ( 134) 00:09:24.220 11443.594 - 11494.006: 31.4844% ( 135) 00:09:24.220 11494.006 - 11544.418: 32.7539% ( 130) 00:09:24.220 11544.418 - 11594.831: 34.0527% ( 133) 00:09:24.220 11594.831 - 11645.243: 35.4004% ( 138) 00:09:24.220 11645.243 - 11695.655: 36.6895% ( 132) 00:09:24.220 11695.655 - 11746.068: 37.9688% ( 131) 00:09:24.220 11746.068 - 11796.480: 39.2188% ( 128) 00:09:24.220 11796.480 - 11846.892: 40.4688% ( 128) 00:09:24.220 11846.892 - 11897.305: 41.6699% ( 123) 00:09:24.220 11897.305 - 11947.717: 42.9688% ( 133) 00:09:24.220 11947.717 - 11998.129: 44.2383% ( 130) 00:09:24.220 11998.129 - 12048.542: 45.4785% ( 127) 
00:09:24.220 12048.542 - 12098.954: 46.7383% ( 129) 00:09:24.220 12098.954 - 12149.366: 47.9883% ( 128) 00:09:24.220 12149.366 - 12199.778: 49.1602% ( 120) 00:09:24.220 12199.778 - 12250.191: 50.1855% ( 105) 00:09:24.220 12250.191 - 12300.603: 51.2402% ( 108) 00:09:24.220 12300.603 - 12351.015: 52.2656% ( 105) 00:09:24.220 12351.015 - 12401.428: 53.2227% ( 98) 00:09:24.220 12401.428 - 12451.840: 54.2383% ( 104) 00:09:24.220 12451.840 - 12502.252: 55.1855% ( 97) 00:09:24.221 12502.252 - 12552.665: 56.1426% ( 98) 00:09:24.221 12552.665 - 12603.077: 57.1191% ( 100) 00:09:24.221 12603.077 - 12653.489: 58.0371% ( 94) 00:09:24.221 12653.489 - 12703.902: 59.0527% ( 104) 00:09:24.221 12703.902 - 12754.314: 60.0000% ( 97) 00:09:24.221 12754.314 - 12804.726: 60.9570% ( 98) 00:09:24.221 12804.726 - 12855.138: 61.8750% ( 94) 00:09:24.221 12855.138 - 12905.551: 62.8027% ( 95) 00:09:24.221 12905.551 - 13006.375: 64.5801% ( 182) 00:09:24.221 13006.375 - 13107.200: 66.2891% ( 175) 00:09:24.221 13107.200 - 13208.025: 67.9297% ( 168) 00:09:24.221 13208.025 - 13308.849: 69.6484% ( 176) 00:09:24.221 13308.849 - 13409.674: 71.4844% ( 188) 00:09:24.221 13409.674 - 13510.498: 73.2715% ( 183) 00:09:24.221 13510.498 - 13611.323: 75.1074% ( 188) 00:09:24.221 13611.323 - 13712.148: 76.9336% ( 187) 00:09:24.221 13712.148 - 13812.972: 78.6816% ( 179) 00:09:24.221 13812.972 - 13913.797: 80.2051% ( 156) 00:09:24.221 13913.797 - 14014.622: 81.6797% ( 151) 00:09:24.221 14014.622 - 14115.446: 83.1152% ( 147) 00:09:24.221 14115.446 - 14216.271: 84.5020% ( 142) 00:09:24.221 14216.271 - 14317.095: 85.8203% ( 135) 00:09:24.221 14317.095 - 14417.920: 87.0117% ( 122) 00:09:24.221 14417.920 - 14518.745: 88.0859% ( 110) 00:09:24.221 14518.745 - 14619.569: 89.0039% ( 94) 00:09:24.221 14619.569 - 14720.394: 89.8438% ( 86) 00:09:24.221 14720.394 - 14821.218: 90.6055% ( 78) 00:09:24.221 14821.218 - 14922.043: 91.3379% ( 75) 00:09:24.221 14922.043 - 15022.868: 91.9434% ( 62) 00:09:24.221 15022.868 - 15123.692: 92.3828% ( 45) 00:09:24.221 15123.692 - 15224.517: 92.7832% ( 41) 00:09:24.221 15224.517 - 15325.342: 93.2031% ( 43) 00:09:24.221 15325.342 - 15426.166: 93.5059% ( 31) 00:09:24.221 15426.166 - 15526.991: 93.8086% ( 31) 00:09:24.221 15526.991 - 15627.815: 94.0918% ( 29) 00:09:24.221 15627.815 - 15728.640: 94.3848% ( 30) 00:09:24.221 15728.640 - 15829.465: 94.6777% ( 30) 00:09:24.221 15829.465 - 15930.289: 94.9609% ( 29) 00:09:24.221 15930.289 - 16031.114: 95.2246% ( 27) 00:09:24.221 16031.114 - 16131.938: 95.4980% ( 28) 00:09:24.221 16131.938 - 16232.763: 95.7617% ( 27) 00:09:24.221 16232.763 - 16333.588: 96.0059% ( 25) 00:09:24.221 16333.588 - 16434.412: 96.2207% ( 22) 00:09:24.221 16434.412 - 16535.237: 96.4453% ( 23) 00:09:24.221 16535.237 - 16636.062: 96.6504% ( 21) 00:09:24.221 16636.062 - 16736.886: 96.8848% ( 24) 00:09:24.221 16736.886 - 16837.711: 97.0703% ( 19) 00:09:24.221 16837.711 - 16938.535: 97.2363% ( 17) 00:09:24.221 16938.535 - 17039.360: 97.3633% ( 13) 00:09:24.221 17039.360 - 17140.185: 97.4609% ( 10) 00:09:24.221 17140.185 - 17241.009: 97.5000% ( 4) 00:09:24.221 17543.483 - 17644.308: 97.5098% ( 1) 00:09:24.221 17644.308 - 17745.132: 97.5391% ( 3) 00:09:24.221 17745.132 - 17845.957: 97.5684% ( 3) 00:09:24.221 17845.957 - 17946.782: 97.6172% ( 5) 00:09:24.221 17946.782 - 18047.606: 97.6562% ( 4) 00:09:24.221 18047.606 - 18148.431: 97.6953% ( 4) 00:09:24.221 18148.431 - 18249.255: 97.7344% ( 4) 00:09:24.221 18249.255 - 18350.080: 97.7734% ( 4) 00:09:24.221 18350.080 - 18450.905: 97.8320% ( 6) 00:09:24.221 
18450.905 - 18551.729: 97.8906% ( 6) 00:09:24.221 18551.729 - 18652.554: 97.9395% ( 5) 00:09:24.221 18652.554 - 18753.378: 97.9883% ( 5) 00:09:24.221 18753.378 - 18854.203: 98.0469% ( 6) 00:09:24.221 18854.203 - 18955.028: 98.1055% ( 6) 00:09:24.221 18955.028 - 19055.852: 98.1641% ( 6) 00:09:24.221 19055.852 - 19156.677: 98.2324% ( 7) 00:09:24.221 19156.677 - 19257.502: 98.2910% ( 6) 00:09:24.221 19257.502 - 19358.326: 98.3496% ( 6) 00:09:24.221 19358.326 - 19459.151: 98.4180% ( 7) 00:09:24.221 19459.151 - 19559.975: 98.4863% ( 7) 00:09:24.221 19559.975 - 19660.800: 98.5449% ( 6) 00:09:24.221 19660.800 - 19761.625: 98.6133% ( 7) 00:09:24.221 19761.625 - 19862.449: 98.6719% ( 6) 00:09:24.221 19862.449 - 19963.274: 98.7402% ( 7) 00:09:24.221 19963.274 - 20064.098: 98.7500% ( 1) 00:09:24.221 24702.031 - 24802.855: 98.7695% ( 2) 00:09:24.221 24802.855 - 24903.680: 98.8086% ( 4) 00:09:24.221 24903.680 - 25004.505: 98.8477% ( 4) 00:09:24.221 25004.505 - 25105.329: 98.8867% ( 4) 00:09:24.221 25105.329 - 25206.154: 98.9355% ( 5) 00:09:24.221 25206.154 - 25306.978: 98.9746% ( 4) 00:09:24.221 25306.978 - 25407.803: 99.0137% ( 4) 00:09:24.221 25407.803 - 25508.628: 99.0625% ( 5) 00:09:24.221 25508.628 - 25609.452: 99.1016% ( 4) 00:09:24.221 25609.452 - 25710.277: 99.1406% ( 4) 00:09:24.221 25710.277 - 25811.102: 99.1797% ( 4) 00:09:24.221 25811.102 - 26012.751: 99.2676% ( 9) 00:09:24.221 26012.751 - 26214.400: 99.3457% ( 8) 00:09:24.221 26214.400 - 26416.049: 99.4336% ( 9) 00:09:24.221 26416.049 - 26617.698: 99.5117% ( 8) 00:09:24.221 26617.698 - 26819.348: 99.5996% ( 9) 00:09:24.221 26819.348 - 27020.997: 99.6875% ( 9) 00:09:24.221 27020.997 - 27222.646: 99.7754% ( 9) 00:09:24.221 27222.646 - 27424.295: 99.8633% ( 9) 00:09:24.221 27424.295 - 27625.945: 99.9414% ( 8) 00:09:24.221 27625.945 - 27827.594: 100.0000% ( 6) 00:09:24.221 00:09:24.221 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:24.221 ============================================================================== 00:09:24.221 Range in us Cumulative IO count 00:09:24.221 8570.092 - 8620.505: 0.0195% ( 2) 00:09:24.221 8620.505 - 8670.917: 0.0488% ( 3) 00:09:24.221 8670.917 - 8721.329: 0.0781% ( 3) 00:09:24.221 8721.329 - 8771.742: 0.1172% ( 4) 00:09:24.221 8771.742 - 8822.154: 0.1465% ( 3) 00:09:24.221 8822.154 - 8872.566: 0.1855% ( 4) 00:09:24.221 8872.566 - 8922.978: 0.2051% ( 2) 00:09:24.221 8922.978 - 8973.391: 0.2441% ( 4) 00:09:24.221 8973.391 - 9023.803: 0.2832% ( 4) 00:09:24.221 9023.803 - 9074.215: 0.3516% ( 7) 00:09:24.221 9074.215 - 9124.628: 0.4199% ( 7) 00:09:24.221 9124.628 - 9175.040: 0.4980% ( 8) 00:09:24.221 9175.040 - 9225.452: 0.5957% ( 10) 00:09:24.221 9225.452 - 9275.865: 0.6836% ( 9) 00:09:24.221 9275.865 - 9326.277: 0.7715% ( 9) 00:09:24.221 9326.277 - 9376.689: 0.8789% ( 11) 00:09:24.221 9376.689 - 9427.102: 0.9961% ( 12) 00:09:24.221 9427.102 - 9477.514: 1.1523% ( 16) 00:09:24.221 9477.514 - 9527.926: 1.2988% ( 15) 00:09:24.221 9527.926 - 9578.338: 1.4355% ( 14) 00:09:24.221 9578.338 - 9628.751: 1.5723% ( 14) 00:09:24.221 9628.751 - 9679.163: 1.7871% ( 22) 00:09:24.221 9679.163 - 9729.575: 2.0801% ( 30) 00:09:24.221 9729.575 - 9779.988: 2.3730% ( 30) 00:09:24.221 9779.988 - 9830.400: 2.6758% ( 31) 00:09:24.221 9830.400 - 9880.812: 2.9883% ( 32) 00:09:24.221 9880.812 - 9931.225: 3.2812% ( 30) 00:09:24.221 9931.225 - 9981.637: 3.6035% ( 33) 00:09:24.221 9981.637 - 10032.049: 3.9746% ( 38) 00:09:24.221 10032.049 - 10082.462: 4.3848% ( 42) 00:09:24.221 10082.462 - 10132.874: 4.8438% ( 47) 00:09:24.221 
10132.874 - 10183.286: 5.3613% ( 53) 00:09:24.221 10183.286 - 10233.698: 5.8691% ( 52) 00:09:24.221 10233.698 - 10284.111: 6.4258% ( 57) 00:09:24.221 10284.111 - 10334.523: 7.0703% ( 66) 00:09:24.221 10334.523 - 10384.935: 7.6953% ( 64) 00:09:24.221 10384.935 - 10435.348: 8.4082% ( 73) 00:09:24.221 10435.348 - 10485.760: 9.1211% ( 73) 00:09:24.221 10485.760 - 10536.172: 9.8145% ( 71) 00:09:24.221 10536.172 - 10586.585: 10.6152% ( 82) 00:09:24.221 10586.585 - 10636.997: 11.4355% ( 84) 00:09:24.221 10636.997 - 10687.409: 12.2559% ( 84) 00:09:24.221 10687.409 - 10737.822: 13.0957% ( 86) 00:09:24.221 10737.822 - 10788.234: 13.9453% ( 87) 00:09:24.221 10788.234 - 10838.646: 14.9121% ( 99) 00:09:24.221 10838.646 - 10889.058: 15.8887% ( 100) 00:09:24.221 10889.058 - 10939.471: 16.9141% ( 105) 00:09:24.221 10939.471 - 10989.883: 17.9004% ( 101) 00:09:24.221 10989.883 - 11040.295: 18.9844% ( 111) 00:09:24.221 11040.295 - 11090.708: 20.0488% ( 109) 00:09:24.221 11090.708 - 11141.120: 21.2402% ( 122) 00:09:24.221 11141.120 - 11191.532: 22.4609% ( 125) 00:09:24.221 11191.532 - 11241.945: 23.7012% ( 127) 00:09:24.221 11241.945 - 11292.357: 24.9902% ( 132) 00:09:24.221 11292.357 - 11342.769: 26.2402% ( 128) 00:09:24.221 11342.769 - 11393.182: 27.5488% ( 134) 00:09:24.221 11393.182 - 11443.594: 28.7598% ( 124) 00:09:24.221 11443.594 - 11494.006: 30.0000% ( 127) 00:09:24.221 11494.006 - 11544.418: 31.2402% ( 127) 00:09:24.221 11544.418 - 11594.831: 32.4707% ( 126) 00:09:24.221 11594.831 - 11645.243: 33.7695% ( 133) 00:09:24.221 11645.243 - 11695.655: 35.0000% ( 126) 00:09:24.221 11695.655 - 11746.068: 36.2305% ( 126) 00:09:24.221 11746.068 - 11796.480: 37.6074% ( 141) 00:09:24.221 11796.480 - 11846.892: 38.9160% ( 134) 00:09:24.221 11846.892 - 11897.305: 40.3613% ( 148) 00:09:24.221 11897.305 - 11947.717: 41.7383% ( 141) 00:09:24.221 11947.717 - 11998.129: 43.0176% ( 131) 00:09:24.221 11998.129 - 12048.542: 44.2969% ( 131) 00:09:24.221 12048.542 - 12098.954: 45.5371% ( 127) 00:09:24.221 12098.954 - 12149.366: 46.8164% ( 131) 00:09:24.221 12149.366 - 12199.778: 48.0176% ( 123) 00:09:24.221 12199.778 - 12250.191: 49.2480% ( 126) 00:09:24.221 12250.191 - 12300.603: 50.4102% ( 119) 00:09:24.221 12300.603 - 12351.015: 51.5430% ( 116) 00:09:24.221 12351.015 - 12401.428: 52.7148% ( 120) 00:09:24.221 12401.428 - 12451.840: 53.8672% ( 118) 00:09:24.221 12451.840 - 12502.252: 55.0781% ( 124) 00:09:24.221 12502.252 - 12552.665: 56.1621% ( 111) 00:09:24.221 12552.665 - 12603.077: 57.2656% ( 113) 00:09:24.221 12603.077 - 12653.489: 58.3398% ( 110) 00:09:24.221 12653.489 - 12703.902: 59.4727% ( 116) 00:09:24.221 12703.902 - 12754.314: 60.5371% ( 109) 00:09:24.221 12754.314 - 12804.726: 61.6016% ( 109) 00:09:24.221 12804.726 - 12855.138: 62.6367% ( 106) 00:09:24.221 12855.138 - 12905.551: 63.6621% ( 105) 00:09:24.222 12905.551 - 13006.375: 65.6543% ( 204) 00:09:24.222 13006.375 - 13107.200: 67.3145% ( 170) 00:09:24.222 13107.200 - 13208.025: 68.8965% ( 162) 00:09:24.222 13208.025 - 13308.849: 70.4785% ( 162) 00:09:24.222 13308.849 - 13409.674: 72.0996% ( 166) 00:09:24.222 13409.674 - 13510.498: 73.6719% ( 161) 00:09:24.222 13510.498 - 13611.323: 75.2148% ( 158) 00:09:24.222 13611.323 - 13712.148: 76.6699% ( 149) 00:09:24.222 13712.148 - 13812.972: 78.1445% ( 151) 00:09:24.222 13812.972 - 13913.797: 79.5117% ( 140) 00:09:24.222 13913.797 - 14014.622: 80.8301% ( 135) 00:09:24.222 14014.622 - 14115.446: 82.0898% ( 129) 00:09:24.222 14115.446 - 14216.271: 83.2617% ( 120) 00:09:24.222 14216.271 - 14317.095: 84.4141% ( 118) 
00:09:24.222 14317.095 - 14417.920: 85.5176% ( 113) 00:09:24.222 14417.920 - 14518.745: 86.6309% ( 114) 00:09:24.222 14518.745 - 14619.569: 87.6855% ( 108) 00:09:24.222 14619.569 - 14720.394: 88.6133% ( 95) 00:09:24.222 14720.394 - 14821.218: 89.4238% ( 83) 00:09:24.222 14821.218 - 14922.043: 90.1953% ( 79) 00:09:24.222 14922.043 - 15022.868: 90.9961% ( 82) 00:09:24.222 15022.868 - 15123.692: 91.7285% ( 75) 00:09:24.222 15123.692 - 15224.517: 92.3047% ( 59) 00:09:24.222 15224.517 - 15325.342: 92.8613% ( 57) 00:09:24.222 15325.342 - 15426.166: 93.4277% ( 58) 00:09:24.222 15426.166 - 15526.991: 93.9160% ( 50) 00:09:24.222 15526.991 - 15627.815: 94.4238% ( 52) 00:09:24.222 15627.815 - 15728.640: 94.7949% ( 38) 00:09:24.222 15728.640 - 15829.465: 95.0781% ( 29) 00:09:24.222 15829.465 - 15930.289: 95.3027% ( 23) 00:09:24.222 15930.289 - 16031.114: 95.5273% ( 23) 00:09:24.222 16031.114 - 16131.938: 95.8203% ( 30) 00:09:24.222 16131.938 - 16232.763: 96.0645% ( 25) 00:09:24.222 16232.763 - 16333.588: 96.3672% ( 31) 00:09:24.222 16333.588 - 16434.412: 96.6699% ( 31) 00:09:24.222 16434.412 - 16535.237: 96.9922% ( 33) 00:09:24.222 16535.237 - 16636.062: 97.2363% ( 25) 00:09:24.222 16636.062 - 16736.886: 97.4121% ( 18) 00:09:24.222 16736.886 - 16837.711: 97.5586% ( 15) 00:09:24.222 16837.711 - 16938.535: 97.6562% ( 10) 00:09:24.222 16938.535 - 17039.360: 97.7539% ( 10) 00:09:24.222 17039.360 - 17140.185: 97.8320% ( 8) 00:09:24.222 17140.185 - 17241.009: 97.9395% ( 11) 00:09:24.222 17241.009 - 17341.834: 98.0371% ( 10) 00:09:24.222 17341.834 - 17442.658: 98.1348% ( 10) 00:09:24.222 17442.658 - 17543.483: 98.2324% ( 10) 00:09:24.222 17543.483 - 17644.308: 98.3398% ( 11) 00:09:24.222 17644.308 - 17745.132: 98.4277% ( 9) 00:09:24.222 17745.132 - 17845.957: 98.5352% ( 11) 00:09:24.222 17845.957 - 17946.782: 98.6230% ( 9) 00:09:24.222 17946.782 - 18047.606: 98.6914% ( 7) 00:09:24.222 18047.606 - 18148.431: 98.7305% ( 4) 00:09:24.222 18148.431 - 18249.255: 98.7500% ( 2) 00:09:24.222 25609.452 - 25710.277: 98.7793% ( 3) 00:09:24.222 25710.277 - 25811.102: 98.8184% ( 4) 00:09:24.222 25811.102 - 26012.751: 98.9160% ( 10) 00:09:24.222 26012.751 - 26214.400: 99.0527% ( 14) 00:09:24.222 26214.400 - 26416.049: 99.1699% ( 12) 00:09:24.222 26416.049 - 26617.698: 99.2871% ( 12) 00:09:24.222 26617.698 - 26819.348: 99.4141% ( 13) 00:09:24.222 26819.348 - 27020.997: 99.5410% ( 13) 00:09:24.222 27020.997 - 27222.646: 99.6680% ( 13) 00:09:24.222 27222.646 - 27424.295: 99.7949% ( 13) 00:09:24.222 27424.295 - 27625.945: 99.9219% ( 13) 00:09:24.222 27625.945 - 27827.594: 100.0000% ( 8) 00:09:24.222 00:09:24.222 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:24.222 ============================================================================== 00:09:24.222 Range in us Cumulative IO count 00:09:24.222 6604.012 - 6654.425: 0.0293% ( 3) 00:09:24.222 6654.425 - 6704.837: 0.0586% ( 3) 00:09:24.222 6704.837 - 6755.249: 0.0879% ( 3) 00:09:24.222 6755.249 - 6805.662: 0.1270% ( 4) 00:09:24.222 6805.662 - 6856.074: 0.1660% ( 4) 00:09:24.222 6856.074 - 6906.486: 0.1953% ( 3) 00:09:24.222 6906.486 - 6956.898: 0.2246% ( 3) 00:09:24.222 6956.898 - 7007.311: 0.2539% ( 3) 00:09:24.222 7007.311 - 7057.723: 0.2930% ( 4) 00:09:24.222 7057.723 - 7108.135: 0.3320% ( 4) 00:09:24.222 7108.135 - 7158.548: 0.3711% ( 4) 00:09:24.222 7158.548 - 7208.960: 0.4102% ( 4) 00:09:24.222 7208.960 - 7259.372: 0.4492% ( 4) 00:09:24.222 7259.372 - 7309.785: 0.4785% ( 3) 00:09:24.222 7309.785 - 7360.197: 0.5176% ( 4) 00:09:24.222 7360.197 - 
7410.609: 0.5566% ( 4) 00:09:24.222 7410.609 - 7461.022: 0.5859% ( 3) 00:09:24.222 7461.022 - 7511.434: 0.6152% ( 3) 00:09:24.222 7511.434 - 7561.846: 0.6543% ( 4) 00:09:24.222 7561.846 - 7612.258: 0.6836% ( 3) 00:09:24.222 7612.258 - 7662.671: 0.7227% ( 4) 00:09:24.222 7662.671 - 7713.083: 0.7520% ( 3) 00:09:24.222 7713.083 - 7763.495: 0.7812% ( 3) 00:09:24.222 7763.495 - 7813.908: 0.8105% ( 3) 00:09:24.222 7813.908 - 7864.320: 0.8398% ( 3) 00:09:24.222 7864.320 - 7914.732: 0.8691% ( 3) 00:09:24.222 7914.732 - 7965.145: 0.9082% ( 4) 00:09:24.222 7965.145 - 8015.557: 0.9375% ( 3) 00:09:24.222 8015.557 - 8065.969: 0.9668% ( 3) 00:09:24.222 8065.969 - 8116.382: 1.0059% ( 4) 00:09:24.222 8116.382 - 8166.794: 1.0352% ( 3) 00:09:24.222 8166.794 - 8217.206: 1.0645% ( 3) 00:09:24.222 8217.206 - 8267.618: 1.1035% ( 4) 00:09:24.222 8267.618 - 8318.031: 1.1230% ( 2) 00:09:24.222 8318.031 - 8368.443: 1.1523% ( 3) 00:09:24.222 8368.443 - 8418.855: 1.1914% ( 4) 00:09:24.222 8418.855 - 8469.268: 1.2207% ( 3) 00:09:24.222 8469.268 - 8519.680: 1.2500% ( 3) 00:09:24.222 8973.391 - 9023.803: 1.2695% ( 2) 00:09:24.222 9023.803 - 9074.215: 1.3574% ( 9) 00:09:24.222 9074.215 - 9124.628: 1.4453% ( 9) 00:09:24.222 9124.628 - 9175.040: 1.4844% ( 4) 00:09:24.222 9175.040 - 9225.452: 1.5332% ( 5) 00:09:24.222 9225.452 - 9275.865: 1.6211% ( 9) 00:09:24.222 9275.865 - 9326.277: 1.7773% ( 16) 00:09:24.222 9326.277 - 9376.689: 1.8555% ( 8) 00:09:24.222 9376.689 - 9427.102: 1.9141% ( 6) 00:09:24.222 9427.102 - 9477.514: 2.0020% ( 9) 00:09:24.222 9477.514 - 9527.926: 2.0801% ( 8) 00:09:24.222 9527.926 - 9578.338: 2.1777% ( 10) 00:09:24.222 9578.338 - 9628.751: 2.2754% ( 10) 00:09:24.222 9628.751 - 9679.163: 2.3828% ( 11) 00:09:24.222 9679.163 - 9729.575: 2.5391% ( 16) 00:09:24.222 9729.575 - 9779.988: 2.7832% ( 25) 00:09:24.222 9779.988 - 9830.400: 3.1250% ( 35) 00:09:24.222 9830.400 - 9880.812: 3.5059% ( 39) 00:09:24.222 9880.812 - 9931.225: 3.9941% ( 50) 00:09:24.222 9931.225 - 9981.637: 4.4922% ( 51) 00:09:24.222 9981.637 - 10032.049: 4.9805% ( 50) 00:09:24.222 10032.049 - 10082.462: 5.4883% ( 52) 00:09:24.222 10082.462 - 10132.874: 5.9766% ( 50) 00:09:24.222 10132.874 - 10183.286: 6.4746% ( 51) 00:09:24.222 10183.286 - 10233.698: 7.0215% ( 56) 00:09:24.222 10233.698 - 10284.111: 7.6270% ( 62) 00:09:24.222 10284.111 - 10334.523: 8.3008% ( 69) 00:09:24.222 10334.523 - 10384.935: 8.9746% ( 69) 00:09:24.222 10384.935 - 10435.348: 9.5703% ( 61) 00:09:24.222 10435.348 - 10485.760: 10.2344% ( 68) 00:09:24.222 10485.760 - 10536.172: 10.9375% ( 72) 00:09:24.222 10536.172 - 10586.585: 11.6992% ( 78) 00:09:24.222 10586.585 - 10636.997: 12.3633% ( 68) 00:09:24.222 10636.997 - 10687.409: 13.1641% ( 82) 00:09:24.222 10687.409 - 10737.822: 13.9551% ( 81) 00:09:24.222 10737.822 - 10788.234: 14.7559% ( 82) 00:09:24.222 10788.234 - 10838.646: 15.6641% ( 93) 00:09:24.222 10838.646 - 10889.058: 16.5430% ( 90) 00:09:24.222 10889.058 - 10939.471: 17.4609% ( 94) 00:09:24.222 10939.471 - 10989.883: 18.3594% ( 92) 00:09:24.222 10989.883 - 11040.295: 19.2773% ( 94) 00:09:24.222 11040.295 - 11090.708: 20.1074% ( 85) 00:09:24.222 11090.708 - 11141.120: 21.0449% ( 96) 00:09:24.222 11141.120 - 11191.532: 22.0312% ( 101) 00:09:24.222 11191.532 - 11241.945: 23.0176% ( 101) 00:09:24.222 11241.945 - 11292.357: 23.9941% ( 100) 00:09:24.222 11292.357 - 11342.769: 24.9512% ( 98) 00:09:24.222 11342.769 - 11393.182: 25.8496% ( 92) 00:09:24.222 11393.182 - 11443.594: 26.8359% ( 101) 00:09:24.222 11443.594 - 11494.006: 27.8809% ( 107) 00:09:24.222 11494.006 
- 11544.418: 29.0527% ( 120) 00:09:24.222 11544.418 - 11594.831: 30.2051% ( 118) 00:09:24.222 11594.831 - 11645.243: 31.3770% ( 120) 00:09:24.222 11645.243 - 11695.655: 32.5977% ( 125) 00:09:24.222 11695.655 - 11746.068: 33.7012% ( 113) 00:09:24.222 11746.068 - 11796.480: 34.9512% ( 128) 00:09:24.222 11796.480 - 11846.892: 36.0840% ( 116) 00:09:24.222 11846.892 - 11897.305: 37.2949% ( 124) 00:09:24.222 11897.305 - 11947.717: 38.4668% ( 120) 00:09:24.222 11947.717 - 11998.129: 39.7363% ( 130) 00:09:24.222 11998.129 - 12048.542: 41.0645% ( 136) 00:09:24.222 12048.542 - 12098.954: 42.3438% ( 131) 00:09:24.222 12098.954 - 12149.366: 43.6035% ( 129) 00:09:24.222 12149.366 - 12199.778: 44.8633% ( 129) 00:09:24.222 12199.778 - 12250.191: 46.1621% ( 133) 00:09:24.222 12250.191 - 12300.603: 47.5586% ( 143) 00:09:24.222 12300.603 - 12351.015: 48.8379% ( 131) 00:09:24.222 12351.015 - 12401.428: 50.1562% ( 135) 00:09:24.222 12401.428 - 12451.840: 51.4746% ( 135) 00:09:24.222 12451.840 - 12502.252: 52.7930% ( 135) 00:09:24.222 12502.252 - 12552.665: 54.0137% ( 125) 00:09:24.222 12552.665 - 12603.077: 55.3125% ( 133) 00:09:24.222 12603.077 - 12653.489: 56.5723% ( 129) 00:09:24.222 12653.489 - 12703.902: 57.8320% ( 129) 00:09:24.222 12703.902 - 12754.314: 59.1406% ( 134) 00:09:24.222 12754.314 - 12804.726: 60.3223% ( 121) 00:09:24.222 12804.726 - 12855.138: 61.4551% ( 116) 00:09:24.222 12855.138 - 12905.551: 62.6953% ( 127) 00:09:24.222 12905.551 - 13006.375: 64.9512% ( 231) 00:09:24.222 13006.375 - 13107.200: 67.1582% ( 226) 00:09:24.222 13107.200 - 13208.025: 69.3262% ( 222) 00:09:24.222 13208.025 - 13308.849: 71.4160% ( 214) 00:09:24.222 13308.849 - 13409.674: 73.2227% ( 185) 00:09:24.222 13409.674 - 13510.498: 74.9707% ( 179) 00:09:24.222 13510.498 - 13611.323: 76.5332% ( 160) 00:09:24.222 13611.323 - 13712.148: 77.9395% ( 144) 00:09:24.222 13712.148 - 13812.972: 79.2969% ( 139) 00:09:24.222 13812.972 - 13913.797: 80.5078% ( 124) 00:09:24.222 13913.797 - 14014.622: 81.5820% ( 110) 00:09:24.222 14014.622 - 14115.446: 82.6172% ( 106) 00:09:24.222 14115.446 - 14216.271: 83.5840% ( 99) 00:09:24.222 14216.271 - 14317.095: 84.5996% ( 104) 00:09:24.222 14317.095 - 14417.920: 85.5957% ( 102) 00:09:24.222 14417.920 - 14518.745: 86.5918% ( 102) 00:09:24.222 14518.745 - 14619.569: 87.5391% ( 97) 00:09:24.222 14619.569 - 14720.394: 88.4277% ( 91) 00:09:24.222 14720.394 - 14821.218: 89.3359% ( 93) 00:09:24.222 14821.218 - 14922.043: 90.0977% ( 78) 00:09:24.222 14922.043 - 15022.868: 90.8105% ( 73) 00:09:24.222 15022.868 - 15123.692: 91.5137% ( 72) 00:09:24.222 15123.692 - 15224.517: 92.2656% ( 77) 00:09:24.222 15224.517 - 15325.342: 92.8809% ( 63) 00:09:24.222 15325.342 - 15426.166: 93.4668% ( 60) 00:09:24.222 15426.166 - 15526.991: 94.0527% ( 60) 00:09:24.222 15526.991 - 15627.815: 94.5703% ( 53) 00:09:24.222 15627.815 - 15728.640: 94.9902% ( 43) 00:09:24.222 15728.640 - 15829.465: 95.4004% ( 42) 00:09:24.222 15829.465 - 15930.289: 95.7129% ( 32) 00:09:24.223 15930.289 - 16031.114: 96.0156% ( 31) 00:09:24.223 16031.114 - 16131.938: 96.2988% ( 29) 00:09:24.223 16131.938 - 16232.763: 96.6016% ( 31) 00:09:24.223 16232.763 - 16333.588: 96.8555% ( 26) 00:09:24.223 16333.588 - 16434.412: 97.0898% ( 24) 00:09:24.223 16434.412 - 16535.237: 97.2754% ( 19) 00:09:24.223 16535.237 - 16636.062: 97.4219% ( 15) 00:09:24.223 16636.062 - 16736.886: 97.5879% ( 17) 00:09:24.223 16736.886 - 16837.711: 97.7344% ( 15) 00:09:24.223 16837.711 - 16938.535: 97.9004% ( 17) 00:09:24.223 16938.535 - 17039.360: 98.0566% ( 16) 00:09:24.223 
17039.360 - 17140.185: 98.1738% ( 12) 00:09:24.223 17140.185 - 17241.009: 98.2715% ( 10) 00:09:24.223 17241.009 - 17341.834: 98.3594% ( 9) 00:09:24.223 17341.834 - 17442.658: 98.4570% ( 10) 00:09:24.223 17442.658 - 17543.483: 98.5449% ( 9) 00:09:24.223 17543.483 - 17644.308: 98.6133% ( 7) 00:09:24.223 17644.308 - 17745.132: 98.6523% ( 4) 00:09:24.223 17745.132 - 17845.957: 98.6719% ( 2) 00:09:24.223 17845.957 - 17946.782: 98.7109% ( 4) 00:09:24.223 17946.782 - 18047.606: 98.7402% ( 3) 00:09:24.223 18047.606 - 18148.431: 98.7500% ( 1) 00:09:24.223 26214.400 - 26416.049: 98.7695% ( 2) 00:09:24.223 26416.049 - 26617.698: 98.8867% ( 12) 00:09:24.223 26617.698 - 26819.348: 99.0332% ( 15) 00:09:24.223 26819.348 - 27020.997: 99.1602% ( 13) 00:09:24.223 27020.997 - 27222.646: 99.2871% ( 13) 00:09:24.223 27222.646 - 27424.295: 99.4141% ( 13) 00:09:24.223 27424.295 - 27625.945: 99.5410% ( 13) 00:09:24.223 27625.945 - 27827.594: 99.6777% ( 14) 00:09:24.223 27827.594 - 28029.243: 99.8047% ( 13) 00:09:24.223 28029.243 - 28230.892: 99.9414% ( 14) 00:09:24.223 28230.892 - 28432.542: 100.0000% ( 6) 00:09:24.223 00:09:24.223 14:55:47 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:25.192 Initializing NVMe Controllers 00:09:25.192 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:25.192 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:25.192 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:25.192 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:25.192 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:25.192 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:25.192 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:25.192 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:25.192 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:25.192 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:25.192 Initialization complete. Launching workers. 
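The histogram and summary blocks in this run come from the spdk_nvme_perf invocation traced above (nvme/nvme.sh line 23): -q 128 keeps 128 IOs outstanding per namespace, -w write issues a 100% write workload, -o 12288 uses 12 KiB IOs, -t 1 runs for one second, and -i 0 selects a DPDK shared-memory group so perf can coexist with other SPDK processes. We read the doubled -LL as turning on latency tracking and upgrading the output from the plain percentile summary to the full per-bucket histograms printed here; treat that reading as an inference from the output, not documentation. A minimal sketch of the same invocation, assuming the binary path from this log and an already-prepared hugepage/driver setup:

```python
#!/usr/bin/env python3
"""Hedged sketch: re-run the perf invocation traced in this log.

Assumptions: SPDK is built at the path below (taken from the log), hugepages
are configured, and the script runs with the privileges the CI node uses.
"""
import subprocess

PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"  # path from the log

args = [
    PERF,
    "-q", "128",    # outstanding IOs (queue depth) per namespace
    "-w", "write",  # workload pattern: 100% writes
    "-o", "12288",  # IO size in bytes (12 KiB)
    "-t", "1",      # run time in seconds
    "-LL",          # latency tracking; the doubled L appears to request
                    # the detailed per-bucket histograms seen in this log
    "-i", "0",      # shared-memory group ID for multi-process setups
]

proc = subprocess.run(args, capture_output=True, text=True)
print(proc.stdout)
if proc.returncode != 0:
    raise SystemExit(proc.stderr)
```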
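Every "Attached to NVMe Controller ... [1b36:0010]" line above carries a PCI vendor:device pair. In the public pci.ids registry, 1b36 is Red Hat, Inc. and 0010 is the QEMU NVM Express controller, so all six namespaces in this run sit on QEMU-emulated NVMe devices inside the test VM, not on physical drives. A tiny illustrative decoder (the dictionary is a stand-in for a real pci.ids lookup, not a complete registry):

```python
# Decode the "[1b36:0010]" tags in the attach lines above.
PCI_IDS = {
    ("1b36", "0010"): "Red Hat, Inc. / QEMU NVM Express controller",
}

def decode(tag: str) -> str:
    """Map a "[vendor:device]" tag to a human-readable name."""
    vendor, device = tag.strip("[]").split(":")
    return PCI_IDS.get((vendor, device), "unknown device")

print(decode("[1b36:0010]"))  # the test controllers are QEMU-emulated
```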
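The Latency(us) table that follows is internally consistent with the flags above: each IO moves 12288 bytes, so MiB/s = IOPS x 12288 / 2^20 per namespace, and the Total row is the six identical per-namespace rows summed (67248.43 vs. 6 x 11208.07 = 67248.42 is just rounding of the unrounded per-device figures). A quick check:

```python
# Cross-check of the Latency(us) table below against the -o 12288 IO size.
IO_SIZE = 12288   # bytes per IO, from the -o flag above
iops = 11208.07   # per-namespace IOPS reported in the table

print(f"{iops * IO_SIZE / 2**20:.2f} MiB/s per namespace")  # -> 131.34
print(f"{6 * iops:.2f} IOPS total")                         # -> 67248.42 (table: 67248.43)
print(f"{6 * iops * IO_SIZE / 2**20:.2f} MiB/s total")      # -> 788.07
```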
00:09:25.192 ========================================================
00:09:25.192                                                                            Latency(us)
00:09:25.192 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:25.192 PCIE (0000:00:09.0) NSID 1 from core 0:   11208.07     131.34   11417.00    7044.65   21480.93
00:09:25.192 PCIE (0000:00:06.0) NSID 1 from core 0:   11208.07     131.34   11423.14    7257.33   22220.87
00:09:25.192 PCIE (0000:00:07.0) NSID 1 from core 0:   11208.07     131.34   11416.62    7473.62   22174.03
00:09:25.192 PCIE (0000:00:08.0) NSID 1 from core 0:   11208.07     131.34   11410.06    7090.18   23833.81
00:09:25.192 PCIE (0000:00:08.0) NSID 2 from core 0:   11208.07     131.34   11403.48    5951.03   23929.11
00:09:25.192 PCIE (0000:00:08.0) NSID 3 from core 0:   11208.07     131.34   11397.52    5203.92   24832.75
00:09:25.192 ========================================================
00:09:25.192 Total                                  :   67248.43     788.07   11411.30    5203.92   24832.75
00:09:25.192
00:09:25.192 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:25.192 =================================================================================
00:09:25.192   1.00000% :  7612.258us
00:09:25.192  10.00000% :  9527.926us
00:09:25.192  25.00000% : 10082.462us
00:09:25.192  50.00000% : 10989.883us
00:09:25.192  75.00000% : 12552.665us
00:09:25.192  90.00000% : 13812.972us
00:09:25.192  95.00000% : 14821.218us
00:09:25.192  98.00000% : 16535.237us
00:09:25.192  99.00000% : 19358.326us
00:09:25.192  99.50000% : 20265.748us
00:09:25.192  99.90000% : 21273.994us
00:09:25.192  99.99000% : 21475.643us
00:09:25.192  99.99900% : 21576.468us
00:09:25.192  99.99990% : 21576.468us
00:09:25.192  99.99999% : 21576.468us
00:09:25.192
00:09:25.192 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:09:25.192 =================================================================================
00:09:25.192   1.00000% :  7763.495us
00:09:25.192  10.00000% :  9275.865us
00:09:25.192  25.00000% : 10032.049us
00:09:25.192  50.00000% : 11090.708us
00:09:25.192  75.00000% : 12603.077us
00:09:25.192  90.00000% : 14014.622us
00:09:25.192  95.00000% : 14821.218us
00:09:25.192  98.00000% : 16535.237us
00:09:25.192  99.00000% : 19559.975us
00:09:25.192  99.50000% : 20870.695us
00:09:25.192  99.90000% : 21979.766us
00:09:25.192  99.99000% : 22282.240us
00:09:25.192  99.99900% : 22282.240us
00:09:25.192  99.99990% : 22282.240us
00:09:25.192  99.99999% : 22282.240us
00:09:25.192
00:09:25.192 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:25.192 =================================================================================
00:09:25.192   1.00000% :  8065.969us
00:09:25.192  10.00000% :  9427.102us
00:09:25.192  25.00000% : 10082.462us
00:09:25.192  50.00000% : 10989.883us
00:09:25.192  75.00000% : 12401.428us
00:09:25.192  90.00000% : 13913.797us
00:09:25.192  95.00000% : 15426.166us
00:09:25.192  98.00000% : 16736.886us
00:09:25.192  99.00000% : 20568.222us
00:09:25.192  99.50000% : 21273.994us
00:09:25.192  99.90000% : 21979.766us
00:09:25.192  99.99000% : 22181.415us
00:09:25.192  99.99900% : 22181.415us
00:09:25.192  99.99990% : 22181.415us
00:09:25.192  99.99999% : 22181.415us
00:09:25.192
00:09:25.192 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:25.192 =================================================================================
00:09:25.192   1.00000% :  7662.671us
00:09:25.192  10.00000% :  9477.514us
00:09:25.192  25.00000% : 10032.049us
00:09:25.192  50.00000% : 10889.058us
00:09:25.192  75.00000% : 12300.603us
00:09:25.192  90.00000% : 14216.271us
00:09:25.192  95.00000% : 15526.991us
00:09:25.192  98.00000% : 16535.237us
00:09:25.192 99.00000% : 22383.065us 00:09:25.192 99.50000% : 22988.012us 00:09:25.192 99.90000% : 23693.785us 00:09:25.192 99.99000% : 23895.434us 00:09:25.192 99.99900% : 23895.434us 00:09:25.192 99.99990% : 23895.434us 00:09:25.192 99.99999% : 23895.434us 00:09:25.192 00:09:25.192 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:25.192 ================================================================================= 00:09:25.192 1.00000% : 6906.486us 00:09:25.192 10.00000% : 9527.926us 00:09:25.192 25.00000% : 10082.462us 00:09:25.192 50.00000% : 10989.883us 00:09:25.192 75.00000% : 12250.191us 00:09:25.192 90.00000% : 13812.972us 00:09:25.192 95.00000% : 15426.166us 00:09:25.192 98.00000% : 16535.237us 00:09:25.192 99.00000% : 22584.714us 00:09:25.192 99.50000% : 23088.837us 00:09:25.192 99.90000% : 23794.609us 00:09:25.192 99.99000% : 23996.258us 00:09:25.192 99.99900% : 23996.258us 00:09:25.192 99.99990% : 23996.258us 00:09:25.192 99.99999% : 23996.258us 00:09:25.192 00:09:25.192 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:25.192 ================================================================================= 00:09:25.192 1.00000% : 6175.508us 00:09:25.192 10.00000% : 9477.514us 00:09:25.192 25.00000% : 10082.462us 00:09:25.192 50.00000% : 10989.883us 00:09:25.192 75.00000% : 12401.428us 00:09:25.192 90.00000% : 13812.972us 00:09:25.192 95.00000% : 15022.868us 00:09:25.192 98.00000% : 16535.237us 00:09:25.192 99.00000% : 23492.135us 00:09:25.192 99.50000% : 24097.083us 00:09:25.192 99.90000% : 24702.031us 00:09:25.192 99.99000% : 24903.680us 00:09:25.192 99.99900% : 24903.680us 00:09:25.192 99.99990% : 24903.680us 00:09:25.192 99.99999% : 24903.680us 00:09:25.192 00:09:25.192 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:25.192 ============================================================================== 00:09:25.192 Range in us Cumulative IO count 00:09:25.192 7007.311 - 7057.723: 0.0089% ( 1) 00:09:25.192 7057.723 - 7108.135: 0.0533% ( 5) 00:09:25.192 7108.135 - 7158.548: 0.0888% ( 4) 00:09:25.192 7158.548 - 7208.960: 0.1154% ( 3) 00:09:25.192 7208.960 - 7259.372: 0.1598% ( 5) 00:09:25.192 7259.372 - 7309.785: 0.2131% ( 6) 00:09:25.193 7309.785 - 7360.197: 0.2752% ( 7) 00:09:25.193 7360.197 - 7410.609: 0.3462% ( 8) 00:09:25.193 7410.609 - 7461.022: 0.4439% ( 11) 00:09:25.193 7461.022 - 7511.434: 0.6303% ( 21) 00:09:25.193 7511.434 - 7561.846: 0.9144% ( 32) 00:09:25.193 7561.846 - 7612.258: 1.0742% ( 18) 00:09:25.193 7612.258 - 7662.671: 1.2251% ( 17) 00:09:25.193 7662.671 - 7713.083: 1.6069% ( 43) 00:09:25.193 7713.083 - 7763.495: 1.7578% ( 17) 00:09:25.193 7763.495 - 7813.908: 1.8821% ( 14) 00:09:25.193 7813.908 - 7864.320: 1.9709% ( 10) 00:09:25.193 7864.320 - 7914.732: 2.0419% ( 8) 00:09:25.193 7914.732 - 7965.145: 2.1218% ( 9) 00:09:25.193 7965.145 - 8015.557: 2.1928% ( 8) 00:09:25.193 8015.557 - 8065.969: 2.2283% ( 4) 00:09:25.193 8065.969 - 8116.382: 2.2550% ( 3) 00:09:25.193 8116.382 - 8166.794: 2.2727% ( 2) 00:09:25.193 8368.443 - 8418.855: 2.3171% ( 5) 00:09:25.193 8418.855 - 8469.268: 2.3704% ( 6) 00:09:25.193 8469.268 - 8519.680: 2.4592% ( 10) 00:09:25.193 8519.680 - 8570.092: 2.5568% ( 11) 00:09:25.193 8570.092 - 8620.505: 2.6545% ( 11) 00:09:25.193 8620.505 - 8670.917: 2.8409% ( 21) 00:09:25.193 8670.917 - 8721.329: 2.9918% ( 17) 00:09:25.193 8721.329 - 8771.742: 3.1605% ( 19) 00:09:25.193 8771.742 - 8822.154: 3.2937% ( 15) 00:09:25.193 8822.154 - 8872.566: 3.4091% ( 13) 00:09:25.193 8872.566 - 
8922.978: 3.6222% ( 24) 00:09:25.193 8922.978 - 8973.391: 3.9418% ( 36) 00:09:25.193 8973.391 - 9023.803: 4.2170% ( 31) 00:09:25.193 9023.803 - 9074.215: 4.4744% ( 29) 00:09:25.193 9074.215 - 9124.628: 4.7053% ( 26) 00:09:25.193 9124.628 - 9175.040: 5.0515% ( 39) 00:09:25.193 9175.040 - 9225.452: 5.5753% ( 59) 00:09:25.193 9225.452 - 9275.865: 6.2589% ( 77) 00:09:25.193 9275.865 - 9326.277: 6.9869% ( 82) 00:09:25.193 9326.277 - 9376.689: 7.7504% ( 86) 00:09:25.193 9376.689 - 9427.102: 8.6381% ( 100) 00:09:25.193 9427.102 - 9477.514: 9.5881% ( 107) 00:09:25.193 9477.514 - 9527.926: 10.7244% ( 128) 00:09:25.193 9527.926 - 9578.338: 11.7543% ( 116) 00:09:25.193 9578.338 - 9628.751: 12.9173% ( 131) 00:09:25.193 9628.751 - 9679.163: 13.9293% ( 114) 00:09:25.193 9679.163 - 9729.575: 15.0568% ( 127) 00:09:25.193 9729.575 - 9779.988: 16.4240% ( 154) 00:09:25.193 9779.988 - 9830.400: 17.9776% ( 175) 00:09:25.193 9830.400 - 9880.812: 19.4158% ( 162) 00:09:25.193 9880.812 - 9931.225: 20.9251% ( 170) 00:09:25.193 9931.225 - 9981.637: 22.5408% ( 182) 00:09:25.193 9981.637 - 10032.049: 24.1211% ( 178) 00:09:25.193 10032.049 - 10082.462: 25.6925% ( 177) 00:09:25.193 10082.462 - 10132.874: 27.1129% ( 160) 00:09:25.193 10132.874 - 10183.286: 28.5689% ( 164) 00:09:25.193 10183.286 - 10233.698: 30.0071% ( 162) 00:09:25.193 10233.698 - 10284.111: 31.8271% ( 205) 00:09:25.193 10284.111 - 10334.523: 33.5227% ( 191) 00:09:25.193 10334.523 - 10384.935: 35.0586% ( 173) 00:09:25.193 10384.935 - 10435.348: 36.5146% ( 164) 00:09:25.193 10435.348 - 10485.760: 38.0060% ( 168) 00:09:25.193 10485.760 - 10536.172: 39.2756% ( 143) 00:09:25.193 10536.172 - 10586.585: 40.4386% ( 131) 00:09:25.193 10586.585 - 10636.997: 41.7791% ( 151) 00:09:25.193 10636.997 - 10687.409: 43.0043% ( 138) 00:09:25.193 10687.409 - 10737.822: 44.1761% ( 132) 00:09:25.193 10737.822 - 10788.234: 45.2592% ( 122) 00:09:25.193 10788.234 - 10838.646: 46.4666% ( 136) 00:09:25.193 10838.646 - 10889.058: 47.5941% ( 127) 00:09:25.193 10889.058 - 10939.471: 48.9169% ( 149) 00:09:25.193 10939.471 - 10989.883: 50.6303% ( 193) 00:09:25.193 10989.883 - 11040.295: 51.8821% ( 141) 00:09:25.193 11040.295 - 11090.708: 52.9386% ( 119) 00:09:25.193 11090.708 - 11141.120: 53.8796% ( 106) 00:09:25.193 11141.120 - 11191.532: 54.7319% ( 96) 00:09:25.193 11191.532 - 11241.945: 55.5930% ( 97) 00:09:25.193 11241.945 - 11292.357: 56.3477% ( 85) 00:09:25.193 11292.357 - 11342.769: 57.1200% ( 87) 00:09:25.193 11342.769 - 11393.182: 57.9812% ( 97) 00:09:25.193 11393.182 - 11443.594: 58.6825% ( 79) 00:09:25.193 11443.594 - 11494.006: 59.3839% ( 79) 00:09:25.193 11494.006 - 11544.418: 60.1296% ( 84) 00:09:25.193 11544.418 - 11594.831: 60.8754% ( 84) 00:09:25.193 11594.831 - 11645.243: 61.6477% ( 87) 00:09:25.193 11645.243 - 11695.655: 62.5533% ( 102) 00:09:25.193 11695.655 - 11746.068: 63.2901% ( 83) 00:09:25.193 11746.068 - 11796.480: 63.9560% ( 75) 00:09:25.193 11796.480 - 11846.892: 64.5597% ( 68) 00:09:25.193 11846.892 - 11897.305: 65.2433% ( 77) 00:09:25.193 11897.305 - 11947.717: 66.0334% ( 89) 00:09:25.193 11947.717 - 11998.129: 66.6548% ( 70) 00:09:25.193 11998.129 - 12048.542: 67.2763% ( 70) 00:09:25.193 12048.542 - 12098.954: 68.0930% ( 92) 00:09:25.193 12098.954 - 12149.366: 68.7500% ( 74) 00:09:25.193 12149.366 - 12199.778: 69.9929% ( 140) 00:09:25.193 12199.778 - 12250.191: 70.7120% ( 81) 00:09:25.193 12250.191 - 12300.603: 71.5199% ( 91) 00:09:25.193 12300.603 - 12351.015: 72.3810% ( 97) 00:09:25.193 12351.015 - 12401.428: 73.1268% ( 84) 00:09:25.193 12401.428 - 
12451.840: 73.8281% ( 79) 00:09:25.193 12451.840 - 12502.252: 74.5561% ( 82) 00:09:25.193 12502.252 - 12552.665: 75.2397% ( 77) 00:09:25.193 12552.665 - 12603.077: 75.9766% ( 83) 00:09:25.193 12603.077 - 12653.489: 76.6779% ( 79) 00:09:25.193 12653.489 - 12703.902: 77.3970% ( 81) 00:09:25.193 12703.902 - 12754.314: 78.2049% ( 91) 00:09:25.193 12754.314 - 12804.726: 78.9418% ( 83) 00:09:25.193 12804.726 - 12855.138: 79.7141% ( 87) 00:09:25.193 12855.138 - 12905.551: 80.5043% ( 89) 00:09:25.193 12905.551 - 13006.375: 82.0046% ( 169) 00:09:25.193 13006.375 - 13107.200: 83.5138% ( 170) 00:09:25.193 13107.200 - 13208.025: 84.5969% ( 122) 00:09:25.193 13208.025 - 13308.849: 85.7244% ( 127) 00:09:25.193 13308.849 - 13409.674: 86.7010% ( 110) 00:09:25.193 13409.674 - 13510.498: 87.6154% ( 103) 00:09:25.193 13510.498 - 13611.323: 88.6630% ( 118) 00:09:25.193 13611.323 - 13712.148: 89.5241% ( 97) 00:09:25.193 13712.148 - 13812.972: 90.2788% ( 85) 00:09:25.193 13812.972 - 13913.797: 90.9268% ( 73) 00:09:25.193 13913.797 - 14014.622: 91.5483% ( 70) 00:09:25.193 14014.622 - 14115.446: 92.1875% ( 72) 00:09:25.193 14115.446 - 14216.271: 92.7912% ( 68) 00:09:25.193 14216.271 - 14317.095: 93.2440% ( 51) 00:09:25.193 14317.095 - 14417.920: 93.6612% ( 47) 00:09:25.193 14417.920 - 14518.745: 94.0607% ( 45) 00:09:25.193 14518.745 - 14619.569: 94.3981% ( 38) 00:09:25.193 14619.569 - 14720.394: 94.8864% ( 55) 00:09:25.193 14720.394 - 14821.218: 95.2060% ( 36) 00:09:25.193 14821.218 - 14922.043: 95.5167% ( 35) 00:09:25.193 14922.043 - 15022.868: 95.8629% ( 39) 00:09:25.193 15022.868 - 15123.692: 96.1914% ( 37) 00:09:25.193 15123.692 - 15224.517: 96.3867% ( 22) 00:09:25.193 15224.517 - 15325.342: 96.5465% ( 18) 00:09:25.193 15325.342 - 15426.166: 96.7418% ( 22) 00:09:25.193 15426.166 - 15526.991: 96.9549% ( 24) 00:09:25.193 15526.991 - 15627.815: 97.0881% ( 15) 00:09:25.193 15627.815 - 15728.640: 97.2212% ( 15) 00:09:25.193 15728.640 - 15829.465: 97.3633% ( 16) 00:09:25.193 15829.465 - 15930.289: 97.4698% ( 12) 00:09:25.193 15930.289 - 16031.114: 97.5586% ( 10) 00:09:25.193 16031.114 - 16131.938: 97.6562% ( 11) 00:09:25.193 16131.938 - 16232.763: 97.7539% ( 11) 00:09:25.193 16232.763 - 16333.588: 97.8516% ( 11) 00:09:25.193 16333.588 - 16434.412: 97.9492% ( 11) 00:09:25.193 16434.412 - 16535.237: 98.0558% ( 12) 00:09:25.193 16535.237 - 16636.062: 98.1268% ( 8) 00:09:25.193 16636.062 - 16736.886: 98.1889% ( 7) 00:09:25.193 16736.886 - 16837.711: 98.2599% ( 8) 00:09:25.193 16837.711 - 16938.535: 98.3931% ( 15) 00:09:25.193 16938.535 - 17039.360: 98.4464% ( 6) 00:09:25.193 17039.360 - 17140.185: 98.4819% ( 4) 00:09:25.193 17140.185 - 17241.009: 98.5085% ( 3) 00:09:25.193 17241.009 - 17341.834: 98.5529% ( 5) 00:09:25.193 17341.834 - 17442.658: 98.5884% ( 4) 00:09:25.193 17442.658 - 17543.483: 98.6328% ( 5) 00:09:25.193 17543.483 - 17644.308: 98.6683% ( 4) 00:09:25.193 17644.308 - 17745.132: 98.7127% ( 5) 00:09:25.193 17745.132 - 17845.957: 98.7482% ( 4) 00:09:25.193 17845.957 - 17946.782: 98.7926% ( 5) 00:09:25.193 17946.782 - 18047.606: 98.8370% ( 5) 00:09:25.193 18047.606 - 18148.431: 98.8636% ( 3) 00:09:25.193 18955.028 - 19055.852: 98.8725% ( 1) 00:09:25.193 19055.852 - 19156.677: 98.8903% ( 2) 00:09:25.193 19156.677 - 19257.502: 98.9524% ( 7) 00:09:25.193 19257.502 - 19358.326: 99.0146% ( 7) 00:09:25.193 19358.326 - 19459.151: 99.0767% ( 7) 00:09:25.193 19459.151 - 19559.975: 99.1300% ( 6) 00:09:25.193 19559.975 - 19660.800: 99.1921% ( 7) 00:09:25.193 19660.800 - 19761.625: 99.2543% ( 7) 00:09:25.193 19761.625 - 
19862.449: 99.3075% ( 6) 00:09:25.193 19862.449 - 19963.274: 99.3608% ( 6) 00:09:25.193 19963.274 - 20064.098: 99.4229% ( 7) 00:09:25.193 20064.098 - 20164.923: 99.4851% ( 7) 00:09:25.193 20164.923 - 20265.748: 99.5295% ( 5) 00:09:25.193 20265.748 - 20366.572: 99.5827% ( 6) 00:09:25.193 20366.572 - 20467.397: 99.6183% ( 4) 00:09:25.193 20467.397 - 20568.222: 99.6626% ( 5) 00:09:25.193 20568.222 - 20669.046: 99.6982% ( 4) 00:09:25.193 20669.046 - 20769.871: 99.7337% ( 4) 00:09:25.193 20769.871 - 20870.695: 99.7692% ( 4) 00:09:25.194 20870.695 - 20971.520: 99.7958% ( 3) 00:09:25.194 20971.520 - 21072.345: 99.8402% ( 5) 00:09:25.194 21072.345 - 21173.169: 99.8757% ( 4) 00:09:25.194 21173.169 - 21273.994: 99.9112% ( 4) 00:09:25.194 21273.994 - 21374.818: 99.9556% ( 5) 00:09:25.194 21374.818 - 21475.643: 99.9911% ( 4) 00:09:25.194 21475.643 - 21576.468: 100.0000% ( 1) 00:09:25.194 00:09:25.194 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:25.194 ============================================================================== 00:09:25.194 Range in us Cumulative IO count 00:09:25.194 7208.960 - 7259.372: 0.0178% ( 2) 00:09:25.194 7259.372 - 7309.785: 0.0444% ( 3) 00:09:25.194 7309.785 - 7360.197: 0.0621% ( 2) 00:09:25.194 7360.197 - 7410.609: 0.1332% ( 8) 00:09:25.194 7410.609 - 7461.022: 0.2308% ( 11) 00:09:25.194 7461.022 - 7511.434: 0.3817% ( 17) 00:09:25.194 7511.434 - 7561.846: 0.4794% ( 11) 00:09:25.194 7561.846 - 7612.258: 0.5415% ( 7) 00:09:25.194 7612.258 - 7662.671: 0.6747% ( 15) 00:09:25.194 7662.671 - 7713.083: 0.7990% ( 14) 00:09:25.194 7713.083 - 7763.495: 1.1009% ( 34) 00:09:25.194 7763.495 - 7813.908: 1.4205% ( 36) 00:09:25.194 7813.908 - 7864.320: 1.6779% ( 29) 00:09:25.194 7864.320 - 7914.732: 1.9265% ( 28) 00:09:25.194 7914.732 - 7965.145: 2.0330% ( 12) 00:09:25.194 7965.145 - 8015.557: 2.1040% ( 8) 00:09:25.194 8015.557 - 8065.969: 2.2195% ( 13) 00:09:25.194 8065.969 - 8116.382: 2.3260% ( 12) 00:09:25.194 8116.382 - 8166.794: 2.4592% ( 15) 00:09:25.194 8166.794 - 8217.206: 2.6456% ( 21) 00:09:25.194 8217.206 - 8267.618: 2.7166% ( 8) 00:09:25.194 8267.618 - 8318.031: 2.7788% ( 7) 00:09:25.194 8318.031 - 8368.443: 2.8143% ( 4) 00:09:25.194 8368.443 - 8418.855: 2.8942% ( 9) 00:09:25.194 8418.855 - 8469.268: 3.0451% ( 17) 00:09:25.194 8469.268 - 8519.680: 3.2049% ( 18) 00:09:25.194 8519.680 - 8570.092: 3.4268% ( 25) 00:09:25.194 8570.092 - 8620.505: 3.6133% ( 21) 00:09:25.194 8620.505 - 8670.917: 3.8619% ( 28) 00:09:25.194 8670.917 - 8721.329: 4.1726% ( 35) 00:09:25.194 8721.329 - 8771.742: 4.5099% ( 38) 00:09:25.194 8771.742 - 8822.154: 4.8207% ( 35) 00:09:25.194 8822.154 - 8872.566: 5.1048% ( 32) 00:09:25.194 8872.566 - 8922.978: 5.6641% ( 63) 00:09:25.194 8922.978 - 8973.391: 6.1701% ( 57) 00:09:25.194 8973.391 - 9023.803: 6.6317% ( 52) 00:09:25.194 9023.803 - 9074.215: 7.1644% ( 60) 00:09:25.194 9074.215 - 9124.628: 7.8746% ( 80) 00:09:25.194 9124.628 - 9175.040: 8.6026% ( 82) 00:09:25.194 9175.040 - 9225.452: 9.2063% ( 68) 00:09:25.194 9225.452 - 9275.865: 10.0497% ( 95) 00:09:25.194 9275.865 - 9326.277: 11.0352% ( 111) 00:09:25.194 9326.277 - 9376.689: 12.1538% ( 126) 00:09:25.194 9376.689 - 9427.102: 13.3700% ( 137) 00:09:25.194 9427.102 - 9477.514: 14.3200% ( 107) 00:09:25.194 9477.514 - 9527.926: 15.0835% ( 86) 00:09:25.194 9527.926 - 9578.338: 15.8736% ( 89) 00:09:25.194 9578.338 - 9628.751: 16.5838% ( 80) 00:09:25.194 9628.751 - 9679.163: 17.5249% ( 106) 00:09:25.194 9679.163 - 9729.575: 18.8210% ( 146) 00:09:25.194 9729.575 - 9779.988: 20.0728% ( 
141) 00:09:25.194 9779.988 - 9830.400: 21.4755% ( 158) 00:09:25.194 9830.400 - 9880.812: 22.4254% ( 107) 00:09:25.194 9880.812 - 9931.225: 23.4641% ( 117) 00:09:25.194 9931.225 - 9981.637: 24.6538% ( 134) 00:09:25.194 9981.637 - 10032.049: 25.9854% ( 150) 00:09:25.194 10032.049 - 10082.462: 27.2550% ( 143) 00:09:25.194 10082.462 - 10132.874: 28.4002% ( 129) 00:09:25.194 10132.874 - 10183.286: 29.6254% ( 138) 00:09:25.194 10183.286 - 10233.698: 30.7173% ( 123) 00:09:25.194 10233.698 - 10284.111: 31.8004% ( 122) 00:09:25.194 10284.111 - 10334.523: 32.9989% ( 135) 00:09:25.194 10334.523 - 10384.935: 34.0288% ( 116) 00:09:25.194 10384.935 - 10435.348: 35.2006% ( 132) 00:09:25.194 10435.348 - 10485.760: 36.3192% ( 126) 00:09:25.194 10485.760 - 10536.172: 37.5888% ( 143) 00:09:25.194 10536.172 - 10586.585: 38.6009% ( 114) 00:09:25.194 10586.585 - 10636.997: 39.8260% ( 138) 00:09:25.194 10636.997 - 10687.409: 40.9268% ( 124) 00:09:25.194 10687.409 - 10737.822: 42.0455% ( 126) 00:09:25.194 10737.822 - 10788.234: 43.2706% ( 138) 00:09:25.194 10788.234 - 10838.646: 44.3271% ( 119) 00:09:25.194 10838.646 - 10889.058: 45.5167% ( 134) 00:09:25.194 10889.058 - 10939.471: 46.6708% ( 130) 00:09:25.194 10939.471 - 10989.883: 47.8427% ( 132) 00:09:25.194 10989.883 - 11040.295: 49.0146% ( 132) 00:09:25.194 11040.295 - 11090.708: 50.0888% ( 121) 00:09:25.194 11090.708 - 11141.120: 51.2074% ( 126) 00:09:25.194 11141.120 - 11191.532: 52.2816% ( 121) 00:09:25.194 11191.532 - 11241.945: 53.3114% ( 116) 00:09:25.194 11241.945 - 11292.357: 54.3768% ( 120) 00:09:25.194 11292.357 - 11342.769: 55.2734% ( 101) 00:09:25.194 11342.769 - 11393.182: 56.3477% ( 121) 00:09:25.194 11393.182 - 11443.594: 57.1467% ( 90) 00:09:25.194 11443.594 - 11494.006: 58.2919% ( 129) 00:09:25.194 11494.006 - 11544.418: 59.1175% ( 93) 00:09:25.194 11544.418 - 11594.831: 60.3072% ( 134) 00:09:25.194 11594.831 - 11645.243: 61.3814% ( 121) 00:09:25.194 11645.243 - 11695.655: 62.5000% ( 126) 00:09:25.194 11695.655 - 11746.068: 63.2102% ( 80) 00:09:25.194 11746.068 - 11796.480: 64.0803% ( 98) 00:09:25.194 11796.480 - 11846.892: 65.0657% ( 111) 00:09:25.194 11846.892 - 11897.305: 65.9268% ( 97) 00:09:25.194 11897.305 - 11947.717: 66.9123% ( 111) 00:09:25.194 11947.717 - 11998.129: 67.5604% ( 73) 00:09:25.194 11998.129 - 12048.542: 68.2972% ( 83) 00:09:25.194 12048.542 - 12098.954: 68.9009% ( 68) 00:09:25.194 12098.954 - 12149.366: 69.4869% ( 66) 00:09:25.194 12149.366 - 12199.778: 70.0373% ( 62) 00:09:25.194 12199.778 - 12250.191: 70.8896% ( 96) 00:09:25.194 12250.191 - 12300.603: 71.5376% ( 73) 00:09:25.194 12300.603 - 12351.015: 72.0792% ( 61) 00:09:25.194 12351.015 - 12401.428: 72.6740% ( 67) 00:09:25.194 12401.428 - 12451.840: 73.3931% ( 81) 00:09:25.194 12451.840 - 12502.252: 74.1211% ( 82) 00:09:25.194 12502.252 - 12552.665: 74.7603% ( 72) 00:09:25.194 12552.665 - 12603.077: 75.3906% ( 71) 00:09:25.194 12603.077 - 12653.489: 76.1009% ( 80) 00:09:25.194 12653.489 - 12703.902: 76.8022% ( 79) 00:09:25.194 12703.902 - 12754.314: 77.3615% ( 63) 00:09:25.194 12754.314 - 12804.726: 77.9563% ( 67) 00:09:25.194 12804.726 - 12855.138: 78.4890% ( 60) 00:09:25.194 12855.138 - 12905.551: 79.1282% ( 72) 00:09:25.194 12905.551 - 13006.375: 80.4155% ( 145) 00:09:25.194 13006.375 - 13107.200: 81.6229% ( 136) 00:09:25.194 13107.200 - 13208.025: 82.8658% ( 140) 00:09:25.194 13208.025 - 13308.849: 83.9666% ( 124) 00:09:25.194 13308.849 - 13409.674: 85.2894% ( 149) 00:09:25.194 13409.674 - 13510.498: 86.3725% ( 122) 00:09:25.194 13510.498 - 13611.323: 87.4112% ( 
117) 00:09:25.194 13611.323 - 13712.148: 88.3700% ( 108) 00:09:25.194 13712.148 - 13812.972: 89.1690% ( 90) 00:09:25.194 13812.972 - 13913.797: 89.8793% ( 80) 00:09:25.194 13913.797 - 14014.622: 90.8381% ( 108) 00:09:25.194 14014.622 - 14115.446: 91.4595% ( 70) 00:09:25.194 14115.446 - 14216.271: 91.9656% ( 57) 00:09:25.194 14216.271 - 14317.095: 92.4982% ( 60) 00:09:25.194 14317.095 - 14417.920: 92.9865% ( 55) 00:09:25.194 14417.920 - 14518.745: 93.5902% ( 68) 00:09:25.194 14518.745 - 14619.569: 94.1317% ( 61) 00:09:25.194 14619.569 - 14720.394: 94.6555% ( 59) 00:09:25.194 14720.394 - 14821.218: 95.0906% ( 49) 00:09:25.194 14821.218 - 14922.043: 95.6499% ( 63) 00:09:25.194 14922.043 - 15022.868: 96.0316% ( 43) 00:09:25.194 15022.868 - 15123.692: 96.2979% ( 30) 00:09:25.194 15123.692 - 15224.517: 96.5465% ( 28) 00:09:25.194 15224.517 - 15325.342: 96.7685% ( 25) 00:09:25.194 15325.342 - 15426.166: 96.9016% ( 15) 00:09:25.194 15426.166 - 15526.991: 97.0437% ( 16) 00:09:25.194 15526.991 - 15627.815: 97.0881% ( 5) 00:09:25.194 15627.815 - 15728.640: 97.1413% ( 6) 00:09:25.194 15728.640 - 15829.465: 97.2301% ( 10) 00:09:25.194 15829.465 - 15930.289: 97.4077% ( 20) 00:09:25.194 15930.289 - 16031.114: 97.5852% ( 20) 00:09:25.194 16031.114 - 16131.938: 97.7273% ( 16) 00:09:25.194 16131.938 - 16232.763: 97.8249% ( 11) 00:09:25.194 16232.763 - 16333.588: 97.9048% ( 9) 00:09:25.194 16333.588 - 16434.412: 97.9847% ( 9) 00:09:25.194 16434.412 - 16535.237: 98.0558% ( 8) 00:09:25.194 16535.237 - 16636.062: 98.1712% ( 13) 00:09:25.194 16636.062 - 16736.886: 98.2156% ( 5) 00:09:25.194 16736.886 - 16837.711: 98.2599% ( 5) 00:09:25.194 16837.711 - 16938.535: 98.2955% ( 4) 00:09:25.194 16938.535 - 17039.360: 98.3310% ( 4) 00:09:25.194 17039.360 - 17140.185: 98.3931% ( 7) 00:09:25.194 17140.185 - 17241.009: 98.4286% ( 4) 00:09:25.194 17241.009 - 17341.834: 98.4730% ( 5) 00:09:25.194 17341.834 - 17442.658: 98.5085% ( 4) 00:09:25.194 17442.658 - 17543.483: 98.5618% ( 6) 00:09:25.194 17543.483 - 17644.308: 98.6062% ( 5) 00:09:25.194 17644.308 - 17745.132: 98.6683% ( 7) 00:09:25.194 17745.132 - 17845.957: 98.7038% ( 4) 00:09:25.194 17845.957 - 17946.782: 98.7393% ( 4) 00:09:25.194 17946.782 - 18047.606: 98.7749% ( 4) 00:09:25.194 18047.606 - 18148.431: 98.8192% ( 5) 00:09:25.194 18148.431 - 18249.255: 98.8636% ( 5) 00:09:25.194 19156.677 - 19257.502: 98.8725% ( 1) 00:09:25.194 19257.502 - 19358.326: 98.9080% ( 4) 00:09:25.195 19358.326 - 19459.151: 98.9524% ( 5) 00:09:25.195 19459.151 - 19559.975: 99.0057% ( 6) 00:09:25.195 19559.975 - 19660.800: 99.0323% ( 3) 00:09:25.195 19660.800 - 19761.625: 99.0767% ( 5) 00:09:25.195 19761.625 - 19862.449: 99.1122% ( 4) 00:09:25.195 19862.449 - 19963.274: 99.1566% ( 5) 00:09:25.195 19963.274 - 20064.098: 99.2010% ( 5) 00:09:25.195 20064.098 - 20164.923: 99.2365% ( 4) 00:09:25.195 20164.923 - 20265.748: 99.2720% ( 4) 00:09:25.195 20265.748 - 20366.572: 99.3253% ( 6) 00:09:25.195 20366.572 - 20467.397: 99.3608% ( 4) 00:09:25.195 20467.397 - 20568.222: 99.4052% ( 5) 00:09:25.195 20568.222 - 20669.046: 99.4318% ( 3) 00:09:25.195 20669.046 - 20769.871: 99.4673% ( 4) 00:09:25.195 20769.871 - 20870.695: 99.5206% ( 6) 00:09:25.195 20870.695 - 20971.520: 99.5295% ( 1) 00:09:25.195 20971.520 - 21072.345: 99.5739% ( 5) 00:09:25.195 21072.345 - 21173.169: 99.6183% ( 5) 00:09:25.195 21173.169 - 21273.994: 99.6449% ( 3) 00:09:25.195 21273.994 - 21374.818: 99.6893% ( 5) 00:09:25.195 21374.818 - 21475.643: 99.7248% ( 4) 00:09:25.195 21475.643 - 21576.468: 99.7692% ( 5) 00:09:25.195 21576.468 
- 21677.292: 99.8047% ( 4) 00:09:25.195 21677.292 - 21778.117: 99.8402% ( 4) 00:09:25.195 21778.117 - 21878.942: 99.8757% ( 4) 00:09:25.195 21878.942 - 21979.766: 99.9201% ( 5) 00:09:25.195 21979.766 - 22080.591: 99.9290% ( 1) 00:09:25.195 22080.591 - 22181.415: 99.9822% ( 6) 00:09:25.195 22181.415 - 22282.240: 100.0000% ( 2) 00:09:25.195 00:09:25.195 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:25.195 ============================================================================== 00:09:25.195 Range in us Cumulative IO count 00:09:25.195 7461.022 - 7511.434: 0.0089% ( 1) 00:09:25.195 7713.083 - 7763.495: 0.0266% ( 2) 00:09:25.195 7763.495 - 7813.908: 0.0888% ( 7) 00:09:25.195 7813.908 - 7864.320: 0.2219% ( 15) 00:09:25.195 7864.320 - 7914.732: 0.3640% ( 16) 00:09:25.195 7914.732 - 7965.145: 0.6214% ( 29) 00:09:25.195 7965.145 - 8015.557: 0.8612% ( 27) 00:09:25.195 8015.557 - 8065.969: 1.2429% ( 43) 00:09:25.195 8065.969 - 8116.382: 2.0419% ( 90) 00:09:25.195 8116.382 - 8166.794: 2.2461% ( 23) 00:09:25.195 8166.794 - 8217.206: 2.3260% ( 9) 00:09:25.195 8217.206 - 8267.618: 2.4414% ( 13) 00:09:25.195 8267.618 - 8318.031: 2.6634% ( 25) 00:09:25.195 8318.031 - 8368.443: 2.8942% ( 26) 00:09:25.195 8368.443 - 8418.855: 3.1783% ( 32) 00:09:25.195 8418.855 - 8469.268: 3.4624% ( 32) 00:09:25.195 8469.268 - 8519.680: 3.7731% ( 35) 00:09:25.195 8519.680 - 8570.092: 3.9418% ( 19) 00:09:25.195 8570.092 - 8620.505: 4.0927% ( 17) 00:09:25.195 8620.505 - 8670.917: 4.2436% ( 17) 00:09:25.195 8670.917 - 8721.329: 4.3945% ( 17) 00:09:25.195 8721.329 - 8771.742: 5.0781% ( 77) 00:09:25.195 8771.742 - 8822.154: 5.2557% ( 20) 00:09:25.195 8822.154 - 8872.566: 5.5043% ( 28) 00:09:25.195 8872.566 - 8922.978: 5.7262% ( 25) 00:09:25.195 8922.978 - 8973.391: 5.9126% ( 21) 00:09:25.195 8973.391 - 9023.803: 6.1612% ( 28) 00:09:25.195 9023.803 - 9074.215: 6.4276% ( 30) 00:09:25.195 9074.215 - 9124.628: 6.7827% ( 40) 00:09:25.195 9124.628 - 9175.040: 7.1733% ( 44) 00:09:25.195 9175.040 - 9225.452: 7.6438% ( 53) 00:09:25.195 9225.452 - 9275.865: 8.0788% ( 49) 00:09:25.195 9275.865 - 9326.277: 8.7180% ( 72) 00:09:25.195 9326.277 - 9376.689: 9.3928% ( 76) 00:09:25.195 9376.689 - 9427.102: 10.1030% ( 80) 00:09:25.195 9427.102 - 9477.514: 10.8487% ( 84) 00:09:25.195 9477.514 - 9527.926: 11.7898% ( 106) 00:09:25.195 9527.926 - 9578.338: 12.7397% ( 107) 00:09:25.195 9578.338 - 9628.751: 13.7962% ( 119) 00:09:25.195 9628.751 - 9679.163: 15.0124% ( 137) 00:09:25.195 9679.163 - 9729.575: 16.0511% ( 117) 00:09:25.195 9729.575 - 9779.988: 17.3207% ( 143) 00:09:25.195 9779.988 - 9830.400: 18.6790% ( 153) 00:09:25.195 9830.400 - 9880.812: 19.9308% ( 141) 00:09:25.195 9880.812 - 9931.225: 21.1204% ( 134) 00:09:25.195 9931.225 - 9981.637: 22.6119% ( 168) 00:09:25.195 9981.637 - 10032.049: 24.3430% ( 195) 00:09:25.195 10032.049 - 10082.462: 25.9144% ( 177) 00:09:25.195 10082.462 - 10132.874: 27.3438% ( 161) 00:09:25.195 10132.874 - 10183.286: 28.7553% ( 159) 00:09:25.195 10183.286 - 10233.698: 30.2823% ( 172) 00:09:25.195 10233.698 - 10284.111: 32.0312% ( 197) 00:09:25.195 10284.111 - 10334.523: 34.0554% ( 228) 00:09:25.195 10334.523 - 10384.935: 35.7067% ( 186) 00:09:25.195 10384.935 - 10435.348: 37.3136% ( 181) 00:09:25.195 10435.348 - 10485.760: 38.7340% ( 160) 00:09:25.195 10485.760 - 10536.172: 40.0923% ( 153) 00:09:25.195 10536.172 - 10586.585: 41.3707% ( 144) 00:09:25.195 10586.585 - 10636.997: 42.7734% ( 158) 00:09:25.195 10636.997 - 10687.409: 44.1406% ( 154) 00:09:25.195 10687.409 - 10737.822: 45.5078% ( 
00:09:25.195 [ bucket-by-bucket latency detail omitted: continues the previous histogram from 10737.822 us through 22181.415 us, cumulative reaches 100.0000% ]
00:09:25.196 
00:09:25.196 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:25.196 ==============================================================================
00:09:25.196        Range in us     Cumulative    IO count
00:09:25.196 [ bucket-by-bucket latency detail omitted: 7057.723 us through 23895.434 us, cumulative reaches 100.0000% ]
00:09:25.197 
00:09:25.197 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:25.197 ==============================================================================
00:09:25.197        Range in us     Cumulative    IO count
00:09:25.197 [ bucket-by-bucket latency detail omitted: 5948.652 us through 23996.258 us, cumulative reaches 100.0000% ]
00:09:25.198 
00:09:25.198 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:25.198 ==============================================================================
00:09:25.198        Range in us     Cumulative    IO count
00:09:25.199 [ bucket-by-bucket latency detail omitted: 5192.468 us through 24903.680 us, cumulative reaches 100.0000% ]
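The tables summarized above are SPDK's plain-text latency histograms: each row is a latency range in microseconds, the cumulative percentage of I/Os at or below that range, and the per-bucket I/O count. Roughly how such a table can be printed with SPDK's public histogram helpers; this is a sketch, not the perf tool's actual source, and everything except the spdk_histogram_data_* API is an invented illustration:

    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/histogram_data.h"

    /* Called once per bucket by spdk_histogram_data_iterate(); so_far is the
     * running cumulative count, which gives the "Cumulative" column. */
    static void
    print_bucket(void *ctx, uint64_t start, uint64_t end, uint64_t count,
                 uint64_t total, uint64_t so_far)
    {
            uint64_t tsc_rate = *(uint64_t *)ctx;

            if (count == 0) {
                    return;
            }
            printf("%11.3f - %11.3f: %7.4f%% (%5" PRIu64 ")\n",
                   (double)start * 1000000.0 / tsc_rate,
                   (double)end * 1000000.0 / tsc_rate,
                   (double)so_far * 100.0 / total, count);
    }

    /* Datapoints are tallied in TSC ticks, e.g. from an I/O completion
     * callback: spdk_histogram_data_tally(h, now_tsc - submit_tsc). */
    void
    print_latency_table(struct spdk_histogram_data *h, uint64_t tsc_rate)
    {
            printf("       Range in us     Cumulative    IO count\n");
            spdk_histogram_data_iterate(h, print_bucket, &tsc_rate);
    }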
00:09:25.199 
00:09:25.199 ************************************
00:09:25.199 END TEST nvme_perf
00:09:25.199 ************************************
14:55:48 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:25.199 
00:09:25.199 real    0m2.496s
00:09:25.199 user    0m2.171s
00:09:25.199 sys     0m0.214s
14:55:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:48 -- common/autotest_common.sh@10 -- # set +x
14:55:48 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
14:55:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
14:55:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:48 -- common/autotest_common.sh@10 -- # set +x
00:09:25.199 ************************************
00:09:25.199 START TEST nvme_hello_world
00:09:25.199 ************************************
14:55:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:25.460 Initializing NVMe Controllers
00:09:25.460 Attached to 0000:00:09.0
00:09:25.460   Namespace ID: 1 size: 1GB
00:09:25.460 Attached to 0000:00:06.0
00:09:25.460   Namespace ID: 1 size: 6GB
00:09:25.460 Attached to 0000:00:07.0
00:09:25.460   Namespace ID: 1 size: 5GB
00:09:25.460 Attached to 0000:00:08.0
00:09:25.460   Namespace ID: 1 size: 4GB
00:09:25.460   Namespace ID: 2 size: 4GB
00:09:25.460   Namespace ID: 3 size: 4GB
00:09:25.460 Initialization complete.
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 INFO: using host memory buffer for IO
00:09:25.460 Hello world!
00:09:25.460 ************************************
00:09:25.460 END TEST nvme_hello_world
00:09:25.460 ************************************
00:09:25.460 
00:09:25.460 real    0m0.211s
00:09:25.460 user    0m0.077s
00:09:25.460 sys     0m0.088s
14:55:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:48 -- common/autotest_common.sh@10 -- # set +x
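For reference, the probe/attach/enumerate pattern behind the "Attached to ..." and "Namespace ID: ... size: ..." lines above looks roughly like the following. This is a minimal sketch against the standard SPDK NVMe driver API, not the hello_world example's actual source:

    #include <stdio.h>
    #include <stdint.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true; /* attach to every controller the probe finds */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            uint32_t nsid;

            printf("Attached to %s\n", trid->traddr);
            /* walk the active namespaces and report their sizes */
            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                    printf("  Namespace ID: %u size: %juGB\n", nsid,
                           (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
            }
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "hello_world_sketch"; /* hypothetical app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            /* enumerate and attach all local PCIe NVMe controllers
             * (detach and cleanup omitted for brevity) */
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }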
14:55:49 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
14:55:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
00:09:25.460 ************************************
00:09:25.460 START TEST nvme_sgl
00:09:25.460 ************************************
14:55:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:25.721 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:25.721 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:25.721 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:25.721 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:25.721 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:09:25.980 NVMe Readv/Writev Request test
00:09:25.980 Attached to 0000:00:09.0
00:09:25.980 Attached to 0000:00:06.0
00:09:25.980 Attached to 0000:00:07.0
00:09:25.980 Attached to 0000:00:08.0
00:09:25.980 0000:00:06.0: build_io_request_2 test passed
00:09:25.980 0000:00:06.0: build_io_request_4 test passed
00:09:25.980 0000:00:06.0: build_io_request_5 test passed
00:09:25.980 0000:00:06.0: build_io_request_6 test passed
00:09:25.980 0000:00:06.0: build_io_request_7 test passed
00:09:25.980 0000:00:06.0: build_io_request_10 test passed
00:09:25.980 0000:00:07.0: build_io_request_2 test passed
00:09:25.980 0000:00:07.0: build_io_request_4 test passed
00:09:25.980 0000:00:07.0: build_io_request_5 test passed
00:09:25.980 0000:00:07.0: build_io_request_6 test passed
00:09:25.980 0000:00:07.0: build_io_request_7 test passed
00:09:25.980 0000:00:07.0: build_io_request_10 test passed
00:09:25.980 Cleaning up...
00:09:25.980 ************************************
00:09:25.980 END TEST nvme_sgl
00:09:25.980 ************************************
00:09:25.980 
00:09:25.980 real    0m0.297s
00:09:25.980 user    0m0.150s
00:09:25.980 sys     0m0.100s
14:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
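The sgl test drives the scatter-gather variants of the I/O API. A minimal sketch of how a caller supplies an SGL to spdk_nvme_ns_cmd_readv(); the names and the two-element layout are illustrative only, not the test's source:

    #include <sys/uio.h>
    #include "spdk/nvme.h"

    /* Hypothetical two-element scatter list context. */
    struct sgl_ctx {
            struct iovec iov[2];
            int cur;
    };

    static void
    reset_sgl(void *arg, uint32_t offset)
    {
            /* the driver may restart iteration at an arbitrary byte offset;
             * a real implementation must seek to it, ignored here for brevity */
            ((struct sgl_ctx *)arg)->cur = 0;
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
            struct sgl_ctx *c = arg;

            *address = c->iov[c->cur].iov_base;
            *length = (uint32_t)c->iov[c->cur].iov_len;
            c->cur++;
            return 0;
    }

    static void
    read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* completion callback */
    }

    int
    submit_split_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      struct sgl_ctx *c, uint64_t lba, uint32_t lba_count)
    {
            /* spdk_nvme_ns_cmd_readv() walks the SGL via the two callbacks; if
             * the SGE lengths do not add up to lba_count * sector size, the
             * request is rejected up front. The "Invalid IO length parameter"
             * lines above are the test asserting exactly that for deliberately
             * malformed requests. */
            return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count,
                                          read_done, NULL, 0 /* io_flags */,
                                          reset_sgl, next_sge);
    }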
14:55:49 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
14:55:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
00:09:25.980 ************************************
00:09:25.980 START TEST nvme_e2edp
00:09:25.980 ************************************
14:55:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:26.240 NVMe Write/Read with End-to-End data protection test
00:09:26.240 Attached to 0000:00:09.0
00:09:26.240 Attached to 0000:00:06.0
00:09:26.240 Attached to 0000:00:07.0
00:09:26.240 Attached to 0000:00:08.0
00:09:26.240 Cleaning up...
00:09:26.240 ************************************
00:09:26.240 END TEST nvme_e2edp
00:09:26.240 ************************************
00:09:26.240 
00:09:26.240 real    0m0.202s
00:09:26.240 user    0m0.052s
00:09:26.240 sys     0m0.108s
14:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
14:55:49 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
14:55:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
00:09:26.240 ************************************
00:09:26.240 START TEST nvme_reserve
00:09:26.240 ************************************
14:55:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:26.498 =====================================================
00:09:26.498 NVMe Controller at PCI bus 0, device 9, function 0
00:09:26.498 =====================================================
00:09:26.498 Reservations:                Not Supported
00:09:26.498 =====================================================
00:09:26.498 NVMe Controller at PCI bus 0, device 6, function 0
00:09:26.498 =====================================================
00:09:26.498 Reservations:                Not Supported
00:09:26.498 =====================================================
00:09:26.498 NVMe Controller at PCI bus 0, device 7, function 0
00:09:26.498 =====================================================
00:09:26.498 Reservations:                Not Supported
00:09:26.498 =====================================================
00:09:26.498 NVMe Controller at PCI bus 0, device 8, function 0
00:09:26.498 =====================================================
00:09:26.498 Reservations:                Not Supported
00:09:26.498 Reservation test passed
00:09:26.498 ************************************
00:09:26.498 END TEST nvme_reserve
00:09:26.498 ************************************
00:09:26.498 
00:09:26.498 real    0m0.208s
00:09:26.498 user    0m0.058s
00:09:26.498 sys     0m0.102s
14:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
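Reservation support is advertised in the controller's ONCS capabilities, which is what the "Reservations: Not Supported" lines above reflect for these QEMU controllers. A sketch of how a caller might check ONCS and register a reservation key; the key value and function names are hypothetical:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    reserve_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            /* completion callback for the reservation register command */
    }

    int
    try_register_key(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns,
                     struct spdk_nvme_qpair *qpair)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            struct spdk_nvme_reservation_register_data rr = {};

            if (!cdata->oncs.reservations) {
                    printf("Reservations: Not Supported\n");
                    return 0; /* exactly what the controllers above report */
            }
            rr.crkey = 0;
            rr.nrkey = 0xabcd1234; /* hypothetical new reservation key */
            return spdk_nvme_ns_cmd_reservation_register(ns, qpair, &rr,
                            true /* ignore existing key */,
                            SPDK_NVME_RESERVE_REGISTER_KEY,
                            SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                            reserve_done, NULL);
    }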
14:55:49 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
14:55:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:49 -- common/autotest_common.sh@10 -- # set +x
00:09:26.498 ************************************
00:09:26.498 START TEST nvme_err_injection
00:09:26.498 ************************************
14:55:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:26.758 NVMe Error Injection test
00:09:26.758 Attached to 0000:00:09.0
00:09:26.758 Attached to 0000:00:06.0
00:09:26.758 Attached to 0000:00:07.0
00:09:26.758 Attached to 0000:00:08.0
00:09:26.758 0000:00:09.0: get features failed as expected
00:09:26.758 0000:00:06.0: get features failed as expected
00:09:26.758 0000:00:07.0: get features failed as expected
00:09:26.758 0000:00:08.0: get features failed as expected
00:09:26.758 0000:00:09.0: get features successfully as expected
00:09:26.758 0000:00:06.0: get features successfully as expected
00:09:26.759 0000:00:07.0: get features successfully as expected
00:09:26.759 0000:00:08.0: get features successfully as expected
00:09:26.759 0000:00:08.0: read failed as expected
00:09:26.759 0000:00:09.0: read failed as expected
00:09:26.759 0000:00:06.0: read failed as expected
00:09:26.759 0000:00:07.0: read failed as expected
00:09:26.759 0000:00:06.0: read successfully as expected
00:09:26.759 0000:00:09.0: read successfully as expected
00:09:26.759 0000:00:08.0: read successfully as expected
00:09:26.759 0000:00:07.0: read successfully as expected
00:09:26.759 Cleaning up...
00:09:26.759 ************************************
00:09:26.759 END TEST nvme_err_injection
00:09:26.759 ************************************
00:09:26.759 
00:09:26.759 real    0m0.243s
00:09:26.759 user    0m0.080s
00:09:26.759 sys     0m0.112s
14:55:50 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:50 -- common/autotest_common.sh@10 -- # set +x
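The err_injection test arms software-injected failures in the driver so that a command fails deterministically and then recovers, which produces the "failed as expected" / "successfully as expected" pairs above. A sketch of arming one such injection; the exact field semantics live in spdk/nvme.h, so treat this as an assumption-laden outline rather than the test's code:

    #include "spdk/nvme.h"

    /* Arm a one-shot error on the admin GET FEATURES command: the next
     * get-features call completes with the injected status, and the one
     * after it succeeds. Passing qpair == NULL is intended to target the
     * admin queue here (an assumption; check the header for the contract). */
    int
    arm_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
    {
            return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                            SPDK_NVME_OPC_GET_FEATURES,
                            true /* do_not_submit: complete with the injected
                                  * status instead of reaching the device */,
                            0 /* timeout_in_us */,
                            1 /* err_count: fail exactly one command */,
                            SPDK_NVME_SCT_GENERIC,
                            SPDK_NVME_SC_INVALID_FIELD);
    }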
14:55:50 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
14:55:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
14:55:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:55:50 -- common/autotest_common.sh@10 -- # set +x
00:09:26.759 ************************************
00:09:26.759 START TEST nvme_overhead
00:09:26.759 ************************************
14:55:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:28.143 Initializing NVMe Controllers
00:09:28.143 Attached to 0000:00:09.0
00:09:28.143 Attached to 0000:00:06.0
00:09:28.143 Attached to 0000:00:07.0
00:09:28.143 Attached to 0000:00:08.0
00:09:28.143 Initialization complete. Launching workers.
00:09:28.143 submit (in ns)   avg, min, max = 14696.7, 9876.9, 108213.8
00:09:28.143 complete (in ns) avg, min, max =  8952.5, 5981.5, 221650.0
00:09:28.143 
00:09:28.143 Submit histogram
00:09:28.143 ================
00:09:28.143        Range in us     Cumulative     Count
00:09:28.145 [ bucket-by-bucket latency detail omitted: 9.846 us through 108.702 us, cumulative reaches 100.0000% ]
00:09:28.145 
00:09:28.145 Complete histogram
00:09:28.145 ==================
00:09:28.145        Range in us     Cumulative     Count
00:09:28.146 [ bucket-by-bucket latency detail omitted: 5.982 us through 222.129 us, cumulative reaches 100.0000% ]
00:09:28.146 
00:09:28.146 ************************************
00:09:28.146 END TEST nvme_overhead
00:09:28.146 ************************************
00:09:28.146 
00:09:28.146 real    0m1.204s
00:09:28.146 user    0m1.057s
00:09:28.146 sys     0m0.091s
14:55:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:55:51 -- common/autotest_common.sh@10 -- # set +x
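The overhead tool measures the host-side cost of the submit and completion paths, which is what the "submit (in ns)" / "complete (in ns)" summary above reports (here with -o 4096, i.e. 4 KiB I/Os, over -t 1 second; the -H flag appears to enable the bucketed histograms). A stripped-down sketch of the measurement idea, assuming a buffer from spdk_dma_malloc() and an already-allocated qpair:

    #include <stdio.h>
    #include <stdbool.h>
    #include <inttypes.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            *(bool *)arg = true;
    }

    /* Time the submit and completion paths of a single read, roughly what the
     * real tool aggregates into the two histograms above. Sketch only. */
    void
    time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
            bool done = false;
            uint64_t hz = spdk_get_ticks_hz();
            uint64_t t0, t1, t2;

            t0 = spdk_get_ticks();
            spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* lba_count */,
                                  io_done, &done, 0 /* io_flags */);
            t1 = spdk_get_ticks();      /* t1 - t0: submit-path overhead */
            while (!done) {
                    spdk_nvme_qpair_process_completions(qpair, 0);
            }
            t2 = spdk_get_ticks();      /* t2 - t1: device time + completion path */

            printf("submit   %" PRIu64 " ns\n", (t1 - t0) * UINT64_C(1000000000) / hz);
            printf("complete %" PRIu64 " ns\n", (t2 - t1) * UINT64_C(1000000000) / hz);
    }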
common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:28.146 14:55:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.146 14:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.146 ************************************ 00:09:28.146 START TEST nvme_arbitration 00:09:28.146 ************************************ 00:09:28.146 14:55:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:31.440 Initializing NVMe Controllers 00:09:31.440 Attached to 0000:00:09.0 00:09:31.440 Attached to 0000:00:06.0 00:09:31.440 Attached to 0000:00:07.0 00:09:31.440 Attached to 0000:00:08.0 00:09:31.440 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:31.440 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:31.440 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:31.440 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:31.440 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:31.440 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:31.440 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:31.440 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:31.440 Initialization complete. Launching workers. 00:09:31.440 Starting thread on core 1 with urgent priority queue 00:09:31.440 Starting thread on core 2 with urgent priority queue 00:09:31.440 Starting thread on core 3 with urgent priority queue 00:09:31.440 Starting thread on core 0 with urgent priority queue 00:09:31.440 QEMU NVMe Ctrl (12343 ) core 0: 6080.00 IO/s 16.45 secs/100000 ios 00:09:31.440 QEMU NVMe Ctrl (12342 ) core 0: 6080.00 IO/s 16.45 secs/100000 ios 00:09:31.440 QEMU NVMe Ctrl (12340 ) core 1: 6037.33 IO/s 16.56 secs/100000 ios 00:09:31.440 QEMU NVMe Ctrl (12342 ) core 1: 6037.33 IO/s 16.56 secs/100000 ios 00:09:31.440 QEMU NVMe Ctrl (12341 ) core 2: 5354.67 IO/s 18.68 secs/100000 ios 00:09:31.440 QEMU NVMe Ctrl (12342 ) core 3: 5738.67 IO/s 17.43 secs/100000 ios 00:09:31.440 ======================================================== 00:09:31.440 00:09:31.440 ************************************ 00:09:31.440 END TEST nvme_arbitration 00:09:31.440 ************************************ 00:09:31.440 00:09:31.440 real 0m3.238s 00:09:31.440 user 0m9.065s 00:09:31.440 sys 0m0.117s 00:09:31.440 14:55:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.440 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 14:55:54 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:31.440 14:55:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:31.440 14:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.440 14:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 ************************************ 00:09:31.440 START TEST nvme_single_aen 00:09:31.440 ************************************ 00:09:31.440 14:55:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:31.440 [2024-11-18 14:55:54.873921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:31.440 [2024-11-18 14:55:54.874015] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.699 [2024-11-18 14:55:55.029980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:31.699 [2024-11-18 14:55:55.032643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:31.699 [2024-11-18 14:55:55.034532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:31.699 [2024-11-18 14:55:55.036711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:31.699 Asynchronous Event Request test 00:09:31.699 Attached to 0000:00:09.0 00:09:31.699 Attached to 0000:00:06.0 00:09:31.699 Attached to 0000:00:07.0 00:09:31.699 Attached to 0000:00:08.0 00:09:31.699 Reset controller to setup AER completions for this process 00:09:31.699 Registering asynchronous event callbacks... 00:09:31.699 Getting orig temperature thresholds of all controllers 00:09:31.699 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.699 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.699 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.699 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.699 Setting all controllers temperature threshold low to trigger AER 00:09:31.699 Waiting for all controllers temperature threshold to be set lower 00:09:31.699 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.699 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:31.699 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.699 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:31.699 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.699 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:31.699 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.699 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:31.699 Waiting for all controllers to trigger AER and reset threshold 00:09:31.699 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.699 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.699 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.699 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.699 Cleaning up... 
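Editor's note: the single-AER exercise above follows a fixed sequence: record each controller's temperature threshold, drop the threshold below the current temperature so the drive fires an Asynchronous Event Request, then restore it from aer_cb. A minimal sketch of replaying that run by hand, assuming the repo layout shown in this log and that the devices are already bound for userspace access as earlier stages of this job arranged; the -T, -i and -L flags are copied from the run_test invocation above, not from separate documentation:

AER_BIN=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
# -T selects the temperature-threshold variant, -i 0 picks shm id 0,
# -L log enables the 'log' trace flag, exactly as in the run above.
"$AER_BIN" -T -i 0 -L log
echo "aer exited with status $?"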
00:09:31.699 ************************************ 00:09:31.699 END TEST nvme_single_aen 00:09:31.699 ************************************ 00:09:31.699 00:09:31.699 real 0m0.243s 00:09:31.699 user 0m0.068s 00:09:31.699 sys 0m0.114s 00:09:31.699 14:55:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.699 14:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:31.699 14:55:55 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:31.699 14:55:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.699 14:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.699 14:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:31.699 ************************************ 00:09:31.699 START TEST nvme_doorbell_aers 00:09:31.699 ************************************ 00:09:31.699 14:55:55 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:31.699 14:55:55 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:31.699 14:55:55 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:31.699 14:55:55 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.699 14:55:55 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:31.699 14:55:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:31.699 14:55:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:31.699 14:55:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.699 14:55:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.699 14:55:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:31.699 14:55:55 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:31.699 14:55:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:31.699 14:55:55 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:31.699 14:55:55 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:31.956 [2024-11-18 14:55:55.393130] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:09:41.942 Executing: test_write_invalid_db 00:09:41.942 Waiting for AER completion... 00:09:41.942 Failure: test_write_invalid_db 00:09:41.942 00:09:41.942 Executing: test_invalid_db_write_overflow_sq 00:09:41.942 Waiting for AER completion... 00:09:41.942 Failure: test_invalid_db_write_overflow_sq 00:09:41.942 00:09:41.942 Executing: test_invalid_db_write_overflow_cq 00:09:41.942 Waiting for AER completion... 00:09:41.942 Failure: test_invalid_db_write_overflow_cq 00:09:41.942 00:09:41.942 14:56:05 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:41.942 14:56:05 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:41.942 [2024-11-18 14:56:05.416912] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:09:51.898 Executing: test_write_invalid_db 00:09:51.898 Waiting for AER completion... 00:09:51.898 Failure: test_write_invalid_db 00:09:51.898 00:09:51.898 Executing: test_invalid_db_write_overflow_sq 00:09:51.898 Waiting for AER completion... 
00:09:51.899 Failure: test_invalid_db_write_overflow_sq 00:09:51.899 00:09:51.899 Executing: test_invalid_db_write_overflow_cq 00:09:51.899 Waiting for AER completion... 00:09:51.899 Failure: test_invalid_db_write_overflow_cq 00:09:51.899 00:09:51.899 14:56:15 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.899 14:56:15 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:51.899 [2024-11-18 14:56:15.430514] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:01.867 Executing: test_write_invalid_db 00:10:01.867 Waiting for AER completion... 00:10:01.867 Failure: test_write_invalid_db 00:10:01.867 00:10:01.867 Executing: test_invalid_db_write_overflow_sq 00:10:01.867 Waiting for AER completion... 00:10:01.867 Failure: test_invalid_db_write_overflow_sq 00:10:01.867 00:10:01.867 Executing: test_invalid_db_write_overflow_cq 00:10:01.867 Waiting for AER completion... 00:10:01.867 Failure: test_invalid_db_write_overflow_cq 00:10:01.867 00:10:01.867 14:56:25 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.867 14:56:25 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:02.126 [2024-11-18 14:56:25.470561] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 Executing: test_write_invalid_db 00:10:12.099 Waiting for AER completion... 00:10:12.099 Failure: test_write_invalid_db 00:10:12.099 00:10:12.099 Executing: test_invalid_db_write_overflow_sq 00:10:12.099 Waiting for AER completion... 00:10:12.099 Failure: test_invalid_db_write_overflow_sq 00:10:12.099 00:10:12.099 Executing: test_invalid_db_write_overflow_cq 00:10:12.099 Waiting for AER completion... 00:10:12.099 Failure: test_invalid_db_write_overflow_cq 00:10:12.099 00:10:12.099 ************************************ 00:10:12.099 END TEST nvme_doorbell_aers 00:10:12.099 ************************************ 00:10:12.099 00:10:12.099 real 0m40.182s 00:10:12.099 user 0m34.206s 00:10:12.099 sys 0m5.586s 00:10:12.099 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.099 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.099 14:56:35 -- nvme/nvme.sh@97 -- # uname 00:10:12.099 14:56:35 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:12.099 14:56:35 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:12.099 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:12.099 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.099 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.099 ************************************ 00:10:12.099 START TEST nvme_multi_aen 00:10:12.099 ************************************ 00:10:12.099 14:56:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:12.099 [2024-11-18 14:56:35.392338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:12.099 [2024-11-18 14:56:35.392414] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.099 [2024-11-18 14:56:35.502584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:12.099 [2024-11-18 14:56:35.502626] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.503058] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.503115] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.505665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:12.099 [2024-11-18 14:56:35.505746] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.506015] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.506173] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.508557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:12.099 [2024-11-18 14:56:35.508617] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.508802] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.508927] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.511373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:12.099 [2024-11-18 14:56:35.511424] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.511582] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 [2024-11-18 14:56:35.511724] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75537) is not found. Dropping the request. 00:10:12.099 Child process pid: 76052 00:10:12.099 [2024-11-18 14:56:35.522540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:12.099 [2024-11-18 14:56:35.522662] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.358 [Child] Asynchronous Event Request test 00:10:12.358 [Child] Attached to 0000:00:09.0 00:10:12.358 [Child] Attached to 0000:00:06.0 00:10:12.358 [Child] Attached to 0000:00:07.0 00:10:12.358 [Child] Attached to 0000:00:08.0 00:10:12.358 [Child] Registering asynchronous event callbacks... 00:10:12.358 [Child] Getting orig temperature thresholds of all controllers 00:10:12.358 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:12.358 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 [Child] Cleaning up... 00:10:12.358 Asynchronous Event Request test 00:10:12.358 Attached to 0000:00:09.0 00:10:12.358 Attached to 0000:00:06.0 00:10:12.358 Attached to 0000:00:07.0 00:10:12.358 Attached to 0000:00:08.0 00:10:12.358 Reset controller to setup AER completions for this process 00:10:12.358 Registering asynchronous event callbacks... 
00:10:12.358 Getting orig temperature thresholds of all controllers 00:10:12.358 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.358 Setting all controllers temperature threshold low to trigger AER 00:10:12.358 Waiting for all controllers temperature threshold to be set lower 00:10:12.358 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:12.358 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:12.358 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:12.358 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.358 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:12.358 Waiting for all controllers to trigger AER and reset threshold 00:10:12.358 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.358 Cleaning up... 00:10:12.358 ************************************ 00:10:12.358 END TEST nvme_multi_aen 00:10:12.358 ************************************ 00:10:12.358 00:10:12.358 real 0m0.372s 00:10:12.358 user 0m0.115s 00:10:12.358 sys 0m0.164s 00:10:12.358 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.358 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.358 14:56:35 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:12.358 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:12.358 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.358 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.358 ************************************ 00:10:12.358 START TEST nvme_startup 00:10:12.358 ************************************ 00:10:12.358 14:56:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:12.616 Initializing NVMe Controllers 00:10:12.616 Attached to 0000:00:09.0 00:10:12.616 Attached to 0000:00:06.0 00:10:12.616 Attached to 0000:00:07.0 00:10:12.616 Attached to 0000:00:08.0 00:10:12.616 Initialization complete. 00:10:12.616 Time used:120131.102 (us). 
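Editor's note: nvme_startup above does one thing: attach every controller, print the elapsed setup time ("Time used"), and exit. A rough external cross-check of that number, assuming the same binary path and keeping the -t startup timeout (in microseconds) from the run_test line; the wall-clock wrapper is illustrative, not part of autotest:

START_BIN=/home/vagrant/spdk_repo/spdk/test/nvme/startup/startup
t0=$(date +%s%N)              # nanoseconds before launch
"$START_BIN" -t 1000000       # -t: startup timeout in usec, as above
t1=$(date +%s%N)
echo "wall-clock startup: $(( (t1 - t0) / 1000 )) us"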
00:10:12.616 ************************************ 00:10:12.616 END TEST nvme_startup 00:10:12.616 ************************************ 00:10:12.616 00:10:12.616 real 0m0.184s 00:10:12.616 user 0m0.059s 00:10:12.616 sys 0m0.077s 00:10:12.616 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.616 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.616 14:56:35 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:12.616 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.616 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.616 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.616 ************************************ 00:10:12.616 START TEST nvme_multi_secondary 00:10:12.616 ************************************ 00:10:12.616 14:56:35 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:12.616 14:56:35 -- nvme/nvme.sh@52 -- # pid0=76108 00:10:12.616 14:56:35 -- nvme/nvme.sh@54 -- # pid1=76109 00:10:12.617 14:56:35 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:12.617 14:56:35 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:12.617 14:56:35 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:15.899 Initializing NVMe Controllers 00:10:15.899 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:15.899 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:15.899 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:15.899 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:15.899 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:15.899 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:15.899 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:15.899 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:15.899 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:15.899 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:15.899 Initialization complete. Launching workers. 
00:10:15.899 ======================================================== 00:10:15.899 Latency(us) 00:10:15.899 Device Information : IOPS MiB/s Average min max 00:10:15.899 PCIE (0000:00:09.0) NSID 1 from core 1: 7422.38 28.99 2155.17 1062.31 5716.63 00:10:15.899 PCIE (0000:00:06.0) NSID 1 from core 1: 7422.38 28.99 2154.28 976.16 5638.35 00:10:15.899 PCIE (0000:00:07.0) NSID 1 from core 1: 7422.38 28.99 2155.11 1049.07 6520.26 00:10:15.899 PCIE (0000:00:08.0) NSID 1 from core 1: 7422.38 28.99 2155.05 1069.57 7050.36 00:10:15.899 PCIE (0000:00:08.0) NSID 2 from core 1: 7422.38 28.99 2154.96 951.98 6356.53 00:10:15.900 PCIE (0000:00:08.0) NSID 3 from core 1: 7422.38 28.99 2154.87 1031.13 5980.17 00:10:15.900 ======================================================== 00:10:15.900 Total : 44534.26 173.96 2154.90 951.98 7050.36 00:10:15.900 00:10:15.900 Initializing NVMe Controllers 00:10:15.900 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:15.900 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:15.900 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:15.900 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:15.900 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:15.900 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:15.900 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:15.900 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:15.900 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:15.900 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:15.900 Initialization complete. Launching workers. 00:10:15.900 ======================================================== 00:10:15.900 Latency(us) 00:10:15.900 Device Information : IOPS MiB/s Average min max 00:10:15.900 PCIE (0000:00:09.0) NSID 1 from core 2: 3041.27 11.88 5260.11 1063.51 12936.55 00:10:15.900 PCIE (0000:00:06.0) NSID 1 from core 2: 3041.27 11.88 5258.94 1177.69 13350.40 00:10:15.900 PCIE (0000:00:07.0) NSID 1 from core 2: 3041.27 11.88 5260.68 1201.88 12338.32 00:10:15.900 PCIE (0000:00:08.0) NSID 1 from core 2: 3041.27 11.88 5260.11 1222.75 12224.19 00:10:15.900 PCIE (0000:00:08.0) NSID 2 from core 2: 3041.27 11.88 5261.42 1135.70 13113.88 00:10:15.900 PCIE (0000:00:08.0) NSID 3 from core 2: 3041.27 11.88 5261.35 1238.14 13096.45 00:10:15.900 ======================================================== 00:10:15.900 Total : 18247.61 71.28 5260.43 1063.51 13350.40 00:10:15.900 00:10:15.900 14:56:39 -- nvme/nvme.sh@56 -- # wait 76108 00:10:17.802 Initializing NVMe Controllers 00:10:17.802 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:17.802 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:17.802 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:17.802 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:17.802 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:17.802 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:17.802 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:17.802 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:17.802 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:17.802 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:17.802 Initialization complete. Launching workers. 
00:10:17.802 ======================================================== 00:10:17.802 Latency(us) 00:10:17.802 Device Information : IOPS MiB/s Average min max 00:10:17.802 PCIE (0000:00:09.0) NSID 1 from core 0: 10850.50 42.38 1474.22 688.97 5767.36 00:10:17.802 PCIE (0000:00:06.0) NSID 1 from core 0: 10850.50 42.38 1473.39 681.69 5607.06 00:10:17.802 PCIE (0000:00:07.0) NSID 1 from core 0: 10850.50 42.38 1474.18 699.83 5595.33 00:10:17.802 PCIE (0000:00:08.0) NSID 1 from core 0: 10850.50 42.38 1474.16 570.34 5629.06 00:10:17.802 PCIE (0000:00:08.0) NSID 2 from core 0: 10850.50 42.38 1474.13 498.46 5458.78 00:10:17.802 PCIE (0000:00:08.0) NSID 3 from core 0: 10850.50 42.38 1474.11 429.48 5693.59 00:10:17.802 ======================================================== 00:10:17.802 Total : 65103.02 254.31 1474.03 429.48 5767.36 00:10:17.802 00:10:17.802 14:56:41 -- nvme/nvme.sh@57 -- # wait 76109 00:10:17.802 14:56:41 -- nvme/nvme.sh@61 -- # pid0=76177 00:10:17.802 14:56:41 -- nvme/nvme.sh@63 -- # pid1=76178 00:10:17.802 14:56:41 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:17.802 14:56:41 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:17.802 14:56:41 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:21.108 Initializing NVMe Controllers 00:10:21.108 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:21.108 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:21.108 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:21.108 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:21.108 Initialization complete. Launching workers. 
00:10:21.108 ======================================================== 00:10:21.108 Latency(us) 00:10:21.108 Device Information : IOPS MiB/s Average min max 00:10:21.108 PCIE (0000:00:09.0) NSID 1 from core 1: 8001.49 31.26 1999.24 732.74 6067.75 00:10:21.108 PCIE (0000:00:06.0) NSID 1 from core 1: 8001.49 31.26 1998.34 715.22 5794.61 00:10:21.108 PCIE (0000:00:07.0) NSID 1 from core 1: 8001.49 31.26 1999.21 724.73 5833.57 00:10:21.108 PCIE (0000:00:08.0) NSID 1 from core 1: 8001.49 31.26 1999.23 725.46 5485.53 00:10:21.108 PCIE (0000:00:08.0) NSID 2 from core 1: 8001.49 31.26 1999.34 732.36 5917.33 00:10:21.108 PCIE (0000:00:08.0) NSID 3 from core 1: 8001.49 31.26 1999.36 745.50 5745.68 00:10:21.108 ======================================================== 00:10:21.108 Total : 48008.92 187.53 1999.12 715.22 6067.75 00:10:21.108 00:10:21.108 Initializing NVMe Controllers 00:10:21.108 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:21.108 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:21.108 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:21.108 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:21.108 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:21.108 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:21.108 Initialization complete. Launching workers. 00:10:21.108 ======================================================== 00:10:21.108 Latency(us) 00:10:21.108 Device Information : IOPS MiB/s Average min max 00:10:21.108 PCIE (0000:00:09.0) NSID 1 from core 0: 7822.88 30.56 2044.83 737.98 5512.78 00:10:21.108 PCIE (0000:00:06.0) NSID 1 from core 0: 7822.88 30.56 2043.90 724.64 5685.10 00:10:21.108 PCIE (0000:00:07.0) NSID 1 from core 0: 7822.88 30.56 2044.75 731.61 5503.35 00:10:21.108 PCIE (0000:00:08.0) NSID 1 from core 0: 7822.88 30.56 2044.70 737.06 5497.30 00:10:21.108 PCIE (0000:00:08.0) NSID 2 from core 0: 7822.88 30.56 2044.65 745.51 4926.54 00:10:21.108 PCIE (0000:00:08.0) NSID 3 from core 0: 7822.88 30.56 2044.57 741.87 4865.08 00:10:21.108 ======================================================== 00:10:21.108 Total : 46937.26 183.35 2044.57 724.64 5685.10 00:10:21.108 00:10:23.010 Initializing NVMe Controllers 00:10:23.010 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:23.010 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:23.010 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:23.010 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:23.010 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:23.010 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:23.010 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:23.010 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:23.010 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:23.010 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:23.010 Initialization complete. Launching workers. 
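Editor's note: each run in nvme_multi_secondary is a plain spdk_nvme_perf invocation; the primary/secondary pairing comes from both processes passing the same shm id (-i 0), so the later instance attaches to the earlier one's DPDK state instead of re-probing the controllers. A sketch of one core-0/core-1 pair, with flags copied from the nvme.sh lines above (-q queue depth, -w workload, -o I/O size in bytes, -t run seconds, -c core mask); which process ends up primary is an assumption here, since the log does not show start order explicitly:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longer run on lcore 0
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # shorter run on lcore 1
wait                                               # collect both reports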
00:10:23.010 ======================================================== 00:10:23.010 Latency(us) 00:10:23.010 Device Information : IOPS MiB/s Average min max 00:10:23.010 PCIE (0000:00:09.0) NSID 1 from core 2: 4559.66 17.81 3508.52 762.23 14052.94 00:10:23.010 PCIE (0000:00:06.0) NSID 1 from core 2: 4559.66 17.81 3507.32 755.72 13650.62 00:10:23.010 PCIE (0000:00:07.0) NSID 1 from core 2: 4559.66 17.81 3508.24 700.22 13658.26 00:10:23.010 PCIE (0000:00:08.0) NSID 1 from core 2: 4559.66 17.81 3508.35 621.96 13642.27 00:10:23.010 PCIE (0000:00:08.0) NSID 2 from core 2: 4559.66 17.81 3507.94 525.76 12941.12 00:10:23.010 PCIE (0000:00:08.0) NSID 3 from core 2: 4559.66 17.81 3505.75 436.25 13686.48 00:10:23.010 ======================================================== 00:10:23.010 Total : 27357.96 106.87 3507.69 436.25 14052.94 00:10:23.010 00:10:23.010 ************************************ 00:10:23.010 END TEST nvme_multi_secondary 00:10:23.010 ************************************ 00:10:23.010 14:56:46 -- nvme/nvme.sh@65 -- # wait 76177 00:10:23.010 14:56:46 -- nvme/nvme.sh@66 -- # wait 76178 00:10:23.010 00:10:23.010 real 0m10.544s 00:10:23.010 user 0m18.312s 00:10:23.010 sys 0m0.609s 00:10:23.010 14:56:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.010 14:56:46 -- common/autotest_common.sh@10 -- # set +x 00:10:23.010 14:56:46 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:23.010 14:56:46 -- nvme/nvme.sh@102 -- # kill_stub 00:10:23.010 14:56:46 -- common/autotest_common.sh@1075 -- # [[ -e /proc/75120 ]] 00:10:23.010 14:56:46 -- common/autotest_common.sh@1076 -- # kill 75120 00:10:23.010 14:56:46 -- common/autotest_common.sh@1077 -- # wait 75120 00:10:23.944 [2024-11-18 14:56:47.521973] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:23.944 [2024-11-18 14:56:47.522114] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:23.944 [2024-11-18 14:56:47.522150] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:23.944 [2024-11-18 14:56:47.522183] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:24.511 [2024-11-18 14:56:48.036355] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:24.511 [2024-11-18 14:56:48.036475] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:24.511 [2024-11-18 14:56:48.036509] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:24.511 [2024-11-18 14:56:48.036541] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:25.447 [2024-11-18 14:56:49.032452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:25.447 [2024-11-18 14:56:49.032576] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. 
Dropping the request. 00:10:25.447 [2024-11-18 14:56:49.032611] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:25.447 [2024-11-18 14:56:49.032649] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:27.355 [2024-11-18 14:56:50.549443] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:27.355 [2024-11-18 14:56:50.549557] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:27.355 [2024-11-18 14:56:50.549592] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:27.355 [2024-11-18 14:56:50.549625] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76051) is not found. Dropping the request. 00:10:27.355 14:56:50 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:27.355 14:56:50 -- common/autotest_common.sh@1083 -- # echo 2 00:10:27.355 14:56:50 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:27.355 14:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.355 14:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:27.355 ************************************ 00:10:27.355 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:27.355 ************************************ 00:10:27.355 14:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:27.355 * Looking for test storage... 00:10:27.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:27.355 14:56:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:27.355 14:56:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:27.355 14:56:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:27.355 14:56:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:27.355 14:56:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:27.355 14:56:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:27.355 14:56:50 -- scripts/common.sh@335 -- # IFS=.-: 00:10:27.355 14:56:50 -- scripts/common.sh@335 -- # read -ra ver1 00:10:27.355 14:56:50 -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.355 14:56:50 -- scripts/common.sh@336 -- # read -ra ver2 00:10:27.355 14:56:50 -- scripts/common.sh@337 -- # local 'op=<' 00:10:27.355 14:56:50 -- scripts/common.sh@339 -- # ver1_l=2 00:10:27.355 14:56:50 -- scripts/common.sh@340 -- # ver2_l=1 00:10:27.355 14:56:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:27.355 14:56:50 -- scripts/common.sh@343 -- # case "$op" in 00:10:27.355 14:56:50 -- scripts/common.sh@344 -- # : 1 00:10:27.355 14:56:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:27.355 14:56:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.355 14:56:50 -- scripts/common.sh@364 -- # decimal 1 00:10:27.355 14:56:50 -- scripts/common.sh@352 -- # local d=1 00:10:27.355 14:56:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.355 14:56:50 -- scripts/common.sh@354 -- # echo 1 00:10:27.355 14:56:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:27.355 14:56:50 -- scripts/common.sh@365 -- # decimal 2 00:10:27.355 14:56:50 -- scripts/common.sh@352 -- # local d=2 00:10:27.355 14:56:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.355 14:56:50 -- scripts/common.sh@354 -- # echo 2 00:10:27.355 14:56:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:27.355 14:56:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:27.355 14:56:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:27.355 14:56:50 -- scripts/common.sh@367 -- # return 0 00:10:27.355 14:56:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:27.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.355 --rc genhtml_branch_coverage=1 00:10:27.355 --rc genhtml_function_coverage=1 00:10:27.355 --rc genhtml_legend=1 00:10:27.355 --rc geninfo_all_blocks=1 00:10:27.355 --rc geninfo_unexecuted_blocks=1 00:10:27.355 00:10:27.355 ' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:27.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.355 --rc genhtml_branch_coverage=1 00:10:27.355 --rc genhtml_function_coverage=1 00:10:27.355 --rc genhtml_legend=1 00:10:27.355 --rc geninfo_all_blocks=1 00:10:27.355 --rc geninfo_unexecuted_blocks=1 00:10:27.355 00:10:27.355 ' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:27.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.355 --rc genhtml_branch_coverage=1 00:10:27.355 --rc genhtml_function_coverage=1 00:10:27.355 --rc genhtml_legend=1 00:10:27.355 --rc geninfo_all_blocks=1 00:10:27.355 --rc geninfo_unexecuted_blocks=1 00:10:27.355 00:10:27.355 ' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:27.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.355 --rc genhtml_branch_coverage=1 00:10:27.355 --rc genhtml_function_coverage=1 00:10:27.355 --rc genhtml_legend=1 00:10:27.355 --rc geninfo_all_blocks=1 00:10:27.355 --rc geninfo_unexecuted_blocks=1 00:10:27.355 00:10:27.355 ' 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:27.355 14:56:50 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:27.355 14:56:50 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:27.355 14:56:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:27.355 14:56:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:27.355 14:56:50 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:27.355 14:56:50 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:27.355 14:56:50 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:27.355 14:56:50 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:27.355 14:56:50 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:27.355 14:56:50 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:27.355 14:56:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:27.355 14:56:50 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76375 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:27.355 14:56:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76375 00:10:27.355 14:56:50 -- common/autotest_common.sh@829 -- # '[' -z 76375 ']' 00:10:27.355 14:56:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.355 14:56:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.355 14:56:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.355 14:56:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.355 14:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:27.355 [2024-11-18 14:56:50.911185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:27.356 [2024-11-18 14:56:50.911305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76375 ] 00:10:27.615 [2024-11-18 14:56:51.068757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.615 [2024-11-18 14:56:51.111646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:27.615 [2024-11-18 14:56:51.112139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.615 [2024-11-18 14:56:51.112400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.615 [2024-11-18 14:56:51.112527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.615 [2024-11-18 14:56:51.112560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.182 14:56:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.182 14:56:51 -- common/autotest_common.sh@862 -- # return 0 00:10:28.182 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:28.182 14:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.182 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.440 nvme0n1 00:10:28.440 14:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_6V79y.txt 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:28.440 14:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.440 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.440 true 00:10:28.440 14:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731941811 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76392 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:28.440 14:56:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:30.342 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:30.342 14:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.342 14:56:53 -- common/autotest_common.sh@10 -- # set +x 00:10:30.342 [2024-11-18 14:56:53.815457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:30.342 [2024-11-18 14:56:53.815697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:30.343 [2024-11-18 14:56:53.815718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:30.343 [2024-11-18 14:56:53.815750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:30.343 [2024-11-18 14:56:53.817350] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:30.343 14:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.343 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76392 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76392 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76392 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:30.343 14:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.343 14:56:53 -- common/autotest_common.sh@10 -- # set +x 00:10:30.343 14:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_6V79y.txt 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_6V79y.txt 00:10:30.343 14:56:53 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76375 00:10:30.343 14:56:53 -- common/autotest_common.sh@936 -- # '[' -z 76375 ']' 00:10:30.343 14:56:53 -- common/autotest_common.sh@940 -- # kill -0 76375 00:10:30.343 14:56:53 -- common/autotest_common.sh@941 -- # uname 00:10:30.343 14:56:53 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:30.343 14:56:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76375 00:10:30.343 killing process with pid 76375 00:10:30.343 14:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:30.343 14:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:30.343 14:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76375' 00:10:30.343 14:56:53 -- common/autotest_common.sh@955 -- # kill 76375 00:10:30.343 14:56:53 -- common/autotest_common.sh@960 -- # wait 76375 00:10:30.910 14:56:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:30.910 14:56:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:30.910 00:10:30.910 real 0m3.578s 00:10:30.910 user 0m12.627s 00:10:30.910 sys 0m0.536s 00:10:30.910 14:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.910 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:10:30.910 ************************************ 00:10:30.910 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:30.910 ************************************ 00:10:30.910 14:56:54 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:30.910 14:56:54 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:30.910 14:56:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:30.910 14:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.910 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:10:30.910 ************************************ 00:10:30.910 START TEST nvme_fio 00:10:30.910 ************************************ 00:10:30.910 14:56:54 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:30.910 14:56:54 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:30.910 14:56:54 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:30.910 14:56:54 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:30.910 14:56:54 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:30.910 14:56:54 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:30.910 14:56:54 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:30.910 14:56:54 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:30.910 14:56:54 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:30.910 14:56:54 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:30.910 14:56:54 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:30.910 14:56:54 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:30.910 14:56:54 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:30.910 14:56:54 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:30.910 14:56:54 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:30.910 14:56:54 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:31.168 14:56:54 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:31.168 14:56:54 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:31.168 14:56:54 -- nvme/nvme.sh@41 -- # bs=4096 00:10:31.168 14:56:54 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:31.168 14:56:54 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:31.168 14:56:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:31.168 14:56:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:31.168 14:56:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:31.168 14:56:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.168 14:56:54 -- common/autotest_common.sh@1330 -- # shift 00:10:31.168 14:56:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:31.168 14:56:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:31.169 14:56:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.169 14:56:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:31.169 14:56:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:31.169 14:56:54 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:31.169 14:56:54 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:31.169 14:56:54 -- common/autotest_common.sh@1336 -- # break 00:10:31.169 14:56:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:31.169 14:56:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:31.427 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:31.427 fio-3.35 00:10:31.427 Starting 1 thread 00:10:37.992 00:10:37.992 test: (groupid=0, jobs=1): err= 0: pid=76523: Mon Nov 18 14:57:00 2024 00:10:37.992 read: IOPS=23.1k, BW=90.3MiB/s (94.6MB/s)(181MiB/2001msec) 00:10:37.992 slat (usec): min=4, max=201, avg= 5.04, stdev= 2.44 00:10:37.992 clat (usec): min=577, max=7671, avg=2767.74, stdev=789.70 00:10:37.992 lat (usec): min=588, max=7728, avg=2772.78, stdev=791.14 00:10:37.992 clat percentiles (usec): 00:10:37.992 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:37.992 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:37.992 | 70.00th=[ 2573], 80.00th=[ 2737], 90.00th=[ 3687], 95.00th=[ 4817], 00:10:37.992 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7308], 00:10:37.992 | 99.99th=[ 7504] 00:10:37.992 bw ( KiB/s): min=87752, max=96848, per=100.00%, avg=92824.00, stdev=4637.68, samples=3 00:10:37.992 iops : min=21938, max=24212, avg=23206.00, stdev=1159.42, samples=3 00:10:37.992 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:10:37.992 slat (nsec): min=4273, max=62101, avg=5391.33, stdev=2277.26 00:10:37.992 clat (usec): min=695, max=7555, avg=2770.06, stdev=790.53 00:10:37.992 lat (usec): min=707, max=7567, avg=2775.45, stdev=791.94 00:10:37.992 clat percentiles (usec): 00:10:37.992 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:37.992 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:37.992 | 70.00th=[ 2573], 80.00th=[ 2737], 90.00th=[ 3654], 95.00th=[ 4817], 00:10:37.992 | 99.00th=[ 5997], 99.50th=[ 6259], 
99.90th=[ 6980], 99.95th=[ 7242], 00:10:37.992 | 99.99th=[ 7373] 00:10:37.992 bw ( KiB/s): min=88912, max=96192, per=100.00%, avg=92925.33, stdev=3696.99, samples=3 00:10:37.992 iops : min=22228, max=24048, avg=23231.33, stdev=924.25, samples=3 00:10:37.992 lat (usec) : 750=0.01%, 1000=0.01% 00:10:37.992 lat (msec) : 2=0.74%, 4=91.02%, 10=8.22% 00:10:37.992 cpu : usr=99.30%, sys=0.00%, ctx=5, majf=0, minf=627 00:10:37.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:37.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.992 issued rwts: total=46238,45968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.992 00:10:37.992 Run status group 0 (all jobs): 00:10:37.992 READ: bw=90.3MiB/s (94.6MB/s), 90.3MiB/s-90.3MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:10:37.992 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:10:37.992 ----------------------------------------------------- 00:10:37.992 Suppressions used: 00:10:37.992 count bytes template 00:10:37.992 1 32 /usr/src/fio/parse.c 00:10:37.992 1 8 libtcmalloc_minimal.so 00:10:37.992 ----------------------------------------------------- 00:10:37.992 00:10:37.992 14:57:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:37.992 14:57:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:37.992 14:57:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:37.992 14:57:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:37.992 14:57:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:37.992 14:57:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:37.992 14:57:01 -- nvme/nvme.sh@41 -- # bs=4096 00:10:37.992 14:57:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:37.992 14:57:01 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:37.992 14:57:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:37.992 14:57:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:37.992 14:57:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:37.992 14:57:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.992 14:57:01 -- common/autotest_common.sh@1330 -- # shift 00:10:37.992 14:57:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:37.992 14:57:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:37.992 14:57:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.992 14:57:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:37.992 14:57:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:37.992 14:57:01 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:37.992 14:57:01 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:37.992 14:57:01 -- 
common/autotest_common.sh@1336 -- # break 00:10:37.992 14:57:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:37.992 14:57:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:38.253 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:38.253 fio-3.35 00:10:38.253 Starting 1 thread 00:10:44.878 00:10:44.878 test: (groupid=0, jobs=1): err= 0: pid=76601: Mon Nov 18 14:57:07 2024 00:10:44.878 read: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(169MiB/2001msec) 00:10:44.878 slat (nsec): min=4201, max=81692, avg=5322.89, stdev=2035.12 00:10:44.878 clat (usec): min=245, max=11092, avg=2950.19, stdev=911.36 00:10:44.878 lat (usec): min=251, max=11097, avg=2955.52, stdev=912.34 00:10:44.878 clat percentiles (usec): 00:10:44.878 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:10:44.878 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:10:44.878 | 70.00th=[ 2868], 80.00th=[ 3064], 90.00th=[ 3916], 95.00th=[ 5080], 00:10:44.878 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9634], 00:10:44.878 | 99.99th=[10814] 00:10:44.878 bw ( KiB/s): min=78744, max=91672, per=100.00%, avg=87357.33, stdev=7459.37, samples=3 00:10:44.878 iops : min=19686, max=22918, avg=21839.33, stdev=1864.84, samples=3 00:10:44.878 write: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:10:44.878 slat (nsec): min=4354, max=59223, avg=5674.82, stdev=2115.04 00:10:44.878 clat (usec): min=227, max=11221, avg=2966.97, stdev=918.43 00:10:44.878 lat (usec): min=232, max=11236, avg=2972.65, stdev=919.46 00:10:44.878 clat percentiles (usec): 00:10:44.878 | 1.00th=[ 2089], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2474], 00:10:44.878 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:10:44.878 | 70.00th=[ 2900], 80.00th=[ 3097], 90.00th=[ 3949], 95.00th=[ 5080], 00:10:44.878 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9634], 00:10:44.878 | 99.99th=[10552] 00:10:44.878 bw ( KiB/s): min=80584, max=91328, per=100.00%, avg=87522.67, stdev=6018.45, samples=3 00:10:44.878 iops : min=20146, max=22832, avg=21880.67, stdev=1504.61, samples=3 00:10:44.878 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:44.878 lat (msec) : 2=0.48%, 4=89.84%, 10=9.61%, 20=0.02% 00:10:44.878 cpu : usr=99.25%, sys=0.00%, ctx=4, majf=0, minf=627 00:10:44.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:44.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.878 issued rwts: total=43299,42971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.878 00:10:44.878 Run status group 0 (all jobs): 00:10:44.878 READ: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=169MiB (177MB), run=2001-2001msec 00:10:44.878 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:44.878 ----------------------------------------------------- 00:10:44.878 Suppressions used: 00:10:44.878 count bytes template 00:10:44.878 1 32 /usr/src/fio/parse.c 00:10:44.878 1 8 libtcmalloc_minimal.so 00:10:44.878 
----------------------------------------------------- 00:10:44.878 00:10:44.878 14:57:07 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:44.878 14:57:07 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:44.878 14:57:07 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:44.878 14:57:07 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:44.878 14:57:07 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:44.878 14:57:07 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:44.878 14:57:08 -- nvme/nvme.sh@41 -- # bs=4096 00:10:44.878 14:57:08 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:44.878 14:57:08 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:44.878 14:57:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:44.878 14:57:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:44.878 14:57:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:44.878 14:57:08 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:44.878 14:57:08 -- common/autotest_common.sh@1330 -- # shift 00:10:44.878 14:57:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:44.878 14:57:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:44.878 14:57:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:44.878 14:57:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:44.878 14:57:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:44.878 14:57:08 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:44.878 14:57:08 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:44.878 14:57:08 -- common/autotest_common.sh@1336 -- # break 00:10:44.878 14:57:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:44.878 14:57:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:44.878 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:44.878 fio-3.35 00:10:44.878 Starting 1 thread 00:10:51.436 00:10:51.436 test: (groupid=0, jobs=1): err= 0: pid=76667: Mon Nov 18 14:57:14 2024 00:10:51.436 read: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(174MiB/2001msec) 00:10:51.436 slat (nsec): min=4744, max=60703, avg=5706.86, stdev=2425.23 00:10:51.436 clat (usec): min=243, max=8452, avg=2879.55, stdev=849.72 00:10:51.436 lat (usec): min=248, max=8465, avg=2885.26, stdev=851.26 00:10:51.436 clat percentiles (usec): 00:10:51.436 | 1.00th=[ 2376], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2507], 00:10:51.436 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:10:51.436 | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3621], 95.00th=[ 5145], 00:10:51.436 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 7242], 99.95th=[ 7898], 00:10:51.436 | 99.99th=[ 8356] 00:10:51.436 
bw ( KiB/s): min=86056, max=90792, per=99.75%, avg=88594.67, stdev=2386.38, samples=3 00:10:51.436 iops : min=21514, max=22698, avg=22148.67, stdev=596.59, samples=3 00:10:51.436 write: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec); 0 zone resets 00:10:51.436 slat (nsec): min=4859, max=81534, avg=6099.60, stdev=2490.86 00:10:51.436 clat (usec): min=205, max=8411, avg=2884.55, stdev=851.85 00:10:51.436 lat (usec): min=210, max=8425, avg=2890.65, stdev=853.41 00:10:51.436 clat percentiles (usec): 00:10:51.436 | 1.00th=[ 2376], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2507], 00:10:51.436 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:10:51.436 | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3621], 95.00th=[ 5145], 00:10:51.436 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7963], 00:10:51.436 | 99.99th=[ 8291] 00:10:51.436 bw ( KiB/s): min=85720, max=90600, per=100.00%, avg=88784.00, stdev=2668.66, samples=3 00:10:51.436 iops : min=21430, max=22650, avg=22196.00, stdev=667.16, samples=3 00:10:51.436 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:51.436 lat (msec) : 2=0.09%, 4=91.78%, 10=8.08% 00:10:51.436 cpu : usr=99.35%, sys=0.00%, ctx=3, majf=0, minf=628 00:10:51.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:51.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.436 issued rwts: total=44431,44110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.436 00:10:51.436 Run status group 0 (all jobs): 00:10:51.436 READ: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=174MiB (182MB), run=2001-2001msec 00:10:51.436 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:10:51.436 ----------------------------------------------------- 00:10:51.436 Suppressions used: 00:10:51.436 count bytes template 00:10:51.436 1 32 /usr/src/fio/parse.c 00:10:51.436 1 8 libtcmalloc_minimal.so 00:10:51.436 ----------------------------------------------------- 00:10:51.436 00:10:51.436 14:57:14 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:51.436 14:57:14 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:51.436 14:57:14 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:51.436 14:57:14 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:51.698 14:57:15 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:51.698 14:57:15 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:51.959 14:57:15 -- nvme/nvme.sh@41 -- # bs=4096 00:10:51.959 14:57:15 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:51.959 14:57:15 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:51.959 14:57:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:51.959 14:57:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:51.959 14:57:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:51.959 
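The xtrace around this point repeats the same preload dance once per controller: ldd inspects the SPDK fio plugin for the ASan runtime it was linked against, and that runtime is placed ahead of the plugin in LD_PRELOAD before fio launches, since fio dlopen()s the ioengine and the sanitizer runtime has to be resident first. A minimal bash sketch of the pattern, with fio_plugin_sketch as an illustrative name rather than the exact autotest_common.sh helper:

fio_plugin_sketch() {
    local fio_dir=/usr/src/fio
    local plugin=$1; shift                        # e.g. build/fio/spdk_nvme
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # the resolved path is column 3.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # Sanitizer runtime first, then the ioengine fio will dlopen().
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}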
14:57:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:51.959 14:57:15 -- common/autotest_common.sh@1330 -- # shift 00:10:51.959 14:57:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:51.959 14:57:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:51.959 14:57:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:51.959 14:57:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:51.959 14:57:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:51.959 14:57:15 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:51.959 14:57:15 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:51.959 14:57:15 -- common/autotest_common.sh@1336 -- # break 00:10:51.959 14:57:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:51.959 14:57:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:51.959 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:51.959 fio-3.35 00:10:51.959 Starting 1 thread 00:10:57.246 00:10:57.246 test: (groupid=0, jobs=1): err= 0: pid=76727: Mon Nov 18 14:57:20 2024 00:10:57.246 read: IOPS=21.1k, BW=82.6MiB/s (86.6MB/s)(165MiB/2001msec) 00:10:57.246 slat (usec): min=4, max=142, avg= 5.65, stdev= 2.27 00:10:57.246 clat (usec): min=387, max=10879, avg=3021.33, stdev=903.53 00:10:57.246 lat (usec): min=392, max=10939, avg=3026.97, stdev=904.63 00:10:57.246 clat percentiles (usec): 00:10:57.246 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2540], 00:10:57.246 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802], 00:10:57.246 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 4047], 95.00th=[ 5211], 00:10:57.246 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8291], 99.95th=[ 9896], 00:10:57.246 | 99.99th=[10683] 00:10:57.246 bw ( KiB/s): min=81840, max=88288, per=100.00%, avg=84869.33, stdev=3241.58, samples=3 00:10:57.246 iops : min=20460, max=22072, avg=21217.33, stdev=810.40, samples=3 00:10:57.246 write: IOPS=21.0k, BW=82.1MiB/s (86.1MB/s)(164MiB/2001msec); 0 zone resets 00:10:57.246 slat (nsec): min=4345, max=70802, avg=5992.65, stdev=2241.25 00:10:57.246 clat (usec): min=377, max=10778, avg=3030.05, stdev=898.28 00:10:57.246 lat (usec): min=383, max=10819, avg=3036.05, stdev=899.34 00:10:57.246 clat percentiles (usec): 00:10:57.246 | 1.00th=[ 2212], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:10:57.246 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802], 00:10:57.246 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 4080], 95.00th=[ 5145], 00:10:57.246 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[ 9896], 00:10:57.246 | 99.99th=[10683] 00:10:57.246 bw ( KiB/s): min=82296, max=87712, per=100.00%, avg=84989.33, stdev=2708.12, samples=3 00:10:57.246 iops : min=20574, max=21928, avg=21247.33, stdev=677.03, samples=3 00:10:57.246 lat (usec) : 500=0.01%, 750=0.01% 00:10:57.246 lat (msec) : 2=0.19%, 4=89.42%, 10=10.33%, 20=0.04% 00:10:57.246 cpu : usr=99.15%, sys=0.05%, ctx=5, majf=0, minf=625 00:10:57.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:57.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:57.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.246 issued rwts: total=42305,42045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.246 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.246 00:10:57.246 Run status group 0 (all jobs): 00:10:57.246 READ: bw=82.6MiB/s (86.6MB/s), 82.6MiB/s-82.6MiB/s (86.6MB/s-86.6MB/s), io=165MiB (173MB), run=2001-2001msec 00:10:57.246 WRITE: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=164MiB (172MB), run=2001-2001msec 00:10:57.246 ----------------------------------------------------- 00:10:57.246 Suppressions used: 00:10:57.246 count bytes template 00:10:57.246 1 32 /usr/src/fio/parse.c 00:10:57.246 1 8 libtcmalloc_minimal.so 00:10:57.246 ----------------------------------------------------- 00:10:57.246 00:10:57.508 14:57:20 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:57.508 14:57:20 -- nvme/nvme.sh@46 -- # true 00:10:57.508 00:10:57.508 real 0m26.564s 00:10:57.508 user 0m17.804s 00:10:57.508 sys 0m14.486s 00:10:57.508 ************************************ 00:10:57.508 END TEST nvme_fio 00:10:57.508 ************************************ 00:10:57.508 14:57:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:57.508 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:10:57.508 ************************************ 00:10:57.508 END TEST nvme 00:10:57.508 ************************************ 00:10:57.508 00:10:57.508 real 1m39.459s 00:10:57.508 user 3m33.923s 00:10:57.508 sys 0m25.792s 00:10:57.508 14:57:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:57.508 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:10:57.508 14:57:20 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:57.508 14:57:20 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:57.508 14:57:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.508 14:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.508 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:10:57.508 ************************************ 00:10:57.508 START TEST nvme_scc 00:10:57.508 ************************************ 00:10:57.508 14:57:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:57.508 * Looking for test storage... 
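The run-status figures for that last fio pass can be checked by hand from the issued I/O counts above; a quick arithmetic sketch, assuming the 4096-byte blocks the job was configured with:

# 42305 read completions x 4096 B = 173,281,280 bytes in 2001 ms of runtime
echo $((42305 * 4096 / 2001))   # ~86597 bytes/ms, i.e. ~86.6 MB/s
# 86.6 MB/s / 1.048576 = 82.6 MiB/s -- matching "READ: bw=82.6MiB/s (86.6MB/s)"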
00:10:57.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:57.508 14:57:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:57.508 14:57:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:57.508 14:57:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:57.508 14:57:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:57.508 14:57:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:57.508 14:57:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:57.508 14:57:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:57.508 14:57:21 -- scripts/common.sh@335 -- # IFS=.-: 00:10:57.508 14:57:21 -- scripts/common.sh@335 -- # read -ra ver1 00:10:57.508 14:57:21 -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.508 14:57:21 -- scripts/common.sh@336 -- # read -ra ver2 00:10:57.508 14:57:21 -- scripts/common.sh@337 -- # local 'op=<' 00:10:57.508 14:57:21 -- scripts/common.sh@339 -- # ver1_l=2 00:10:57.508 14:57:21 -- scripts/common.sh@340 -- # ver2_l=1 00:10:57.508 14:57:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:57.508 14:57:21 -- scripts/common.sh@343 -- # case "$op" in 00:10:57.508 14:57:21 -- scripts/common.sh@344 -- # : 1 00:10:57.508 14:57:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:57.508 14:57:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.508 14:57:21 -- scripts/common.sh@364 -- # decimal 1 00:10:57.508 14:57:21 -- scripts/common.sh@352 -- # local d=1 00:10:57.508 14:57:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.508 14:57:21 -- scripts/common.sh@354 -- # echo 1 00:10:57.508 14:57:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:57.508 14:57:21 -- scripts/common.sh@365 -- # decimal 2 00:10:57.508 14:57:21 -- scripts/common.sh@352 -- # local d=2 00:10:57.508 14:57:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.508 14:57:21 -- scripts/common.sh@354 -- # echo 2 00:10:57.508 14:57:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:57.508 14:57:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:57.508 14:57:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:57.508 14:57:21 -- scripts/common.sh@367 -- # return 0 00:10:57.508 14:57:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.508 14:57:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:57.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.508 --rc genhtml_branch_coverage=1 00:10:57.508 --rc genhtml_function_coverage=1 00:10:57.508 --rc genhtml_legend=1 00:10:57.508 --rc geninfo_all_blocks=1 00:10:57.508 --rc geninfo_unexecuted_blocks=1 00:10:57.508 00:10:57.508 ' 00:10:57.508 14:57:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:57.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.508 --rc genhtml_branch_coverage=1 00:10:57.508 --rc genhtml_function_coverage=1 00:10:57.508 --rc genhtml_legend=1 00:10:57.509 --rc geninfo_all_blocks=1 00:10:57.509 --rc geninfo_unexecuted_blocks=1 00:10:57.509 00:10:57.509 ' 00:10:57.509 14:57:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.509 --rc genhtml_branch_coverage=1 00:10:57.509 --rc genhtml_function_coverage=1 00:10:57.509 --rc genhtml_legend=1 00:10:57.509 --rc geninfo_all_blocks=1 00:10:57.509 --rc geninfo_unexecuted_blocks=1 00:10:57.509 00:10:57.509 ' 00:10:57.509 14:57:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:57.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.509 --rc genhtml_branch_coverage=1 00:10:57.509 --rc genhtml_function_coverage=1 00:10:57.509 --rc genhtml_legend=1 00:10:57.509 --rc geninfo_all_blocks=1 00:10:57.509 --rc geninfo_unexecuted_blocks=1 00:10:57.509 00:10:57.509 ' 00:10:57.509 14:57:21 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.509 14:57:21 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.509 14:57:21 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:57.509 14:57:21 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:57.509 14:57:21 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.509 14:57:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.509 14:57:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.509 14:57:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.509 14:57:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.509 14:57:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.509 14:57:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.509 14:57:21 -- paths/export.sh@5 -- # export PATH 00:10:57.509 14:57:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.509 14:57:21 -- nvme/functions.sh@10 -- # ctrls=() 00:10:57.509 14:57:21 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:57.509 14:57:21 -- nvme/functions.sh@11 -- # nvmes=() 00:10:57.509 14:57:21 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:57.509 14:57:21 -- nvme/functions.sh@12 -- # bdfs=() 00:10:57.509 14:57:21 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:57.509 14:57:21 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:57.509 14:57:21 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:57.509 
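The declarations being traced here set up the small in-memory model that the controller scan below fills in. A hedged outline of how the arrays relate, with the example values taken from the assignments that appear later in this log:

declare -A ctrls           # ctrls[nvme0]=nvme0        controller is registered
declare -A nvmes           # nvmes[nvme0]=nvme0_ns     name of its namespace array
declare -A bdfs            # bdfs[nvme0]=0000:00:09.0  backing PCI address
declare -a ordered_ctrls   # ordered_ctrls[0]=nvme0    indexed by controller number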
14:57:21 -- nvme/functions.sh@14 -- # nvme_name= 00:10:57.509 14:57:21 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.509 14:57:21 -- nvme/nvme_scc.sh@12 -- # uname 00:10:57.509 14:57:21 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:57.509 14:57:21 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:57.509 14:57:21 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:58.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.081 Waiting for block devices as requested 00:10:58.081 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.081 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.343 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.343 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.654 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:03.654 14:57:26 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:03.654 14:57:26 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:03.654 14:57:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.654 14:57:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:03.654 14:57:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:03.654 14:57:26 -- scripts/common.sh@15 -- # local i 00:11:03.654 14:57:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:03.654 14:57:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.654 14:57:26 -- scripts/common.sh@24 -- # return 0 00:11:03.654 14:57:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:03.654 14:57:26 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:03.654 14:57:26 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@18 -- # shift 00:11:03.654 14:57:26 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.654 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.654 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:03.654 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 
14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.655 14:57:26 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.655 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:03.655 14:57:26 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.655 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:03.656 14:57:26 
-- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:03.656 
14:57:26 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
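One detail in the eval lines above: string-valued identify fields such as sn and mn are fixed-width, so their trailing padding is significant, and the assignment text is quoted before being eval'd to preserve it. A small reproduction of that behavior, with the value copied from this controller's trace:

declare -A nvme0
reg=sn val='12343 '                  # trailing padding as emitted by id-ctrl
eval "nvme0[$reg]=\"$val\""          # yields nvme0[sn]='12343 '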
00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.656 14:57:26 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:03.656 14:57:26 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.656 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 
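All of the IFS=:/read/eval steps above come from one loop: nvme_get runs id-ctrl and folds each "reg : val" line of its output into the controller's associative array. A condensed sketch of that technique, simplified from the traced functions.sh rather than copied from it:

nvme_get_sketch() {
    local ref=$1 dev=$2 reg val              # e.g. nvme_get_sketch nvme0 /dev/nvme0
    local -gA "$ref=()"                      # global assoc array named after the ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # "vid      " -> "vid"
        val=${val# }                         # drop the separator space, keep padding
        [[ -n $reg && -n $val ]] && eval "${ref}[$reg]=\"$val\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}
nvme_get_sketch nvme0 /dev/nvme0   # afterwards: ${nvme0[vid]} -> 0x1b36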
00:11:03.656 14:57:26 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:03.656 14:57:26 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:03.656 14:57:26 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:03.656 14:57:26 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:03.656 14:57:26 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:03.656 14:57:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:03.656 14:57:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:03.656 14:57:26 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:03.656 14:57:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 (scripts/common.sh@15-@24: not in block list, return 0)
00:11:03.656 14:57:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:03.656 14:57:26 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:03.656 14:57:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:03.657 14:57:26 -- nvme/functions.sh@21-@23 -- # nvme1 id-ctrl fields (per-field IFS=:/read/eval trace condensed):
00:11:03.657 14:57:26 --   vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:03.657 14:57:26 --   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:03.657 14:57:26 --   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
00:11:03.657 14:57:26 --   hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:11:03.657 14:57:26 --   nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0
00:11:03.658 14:57:26 --   nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:03.658 14:57:26 --   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:03.658 14:57:26 --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
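[Editor's note: the @47-@52 lines above are the outer discovery loop. Each controller under /sys/class/nvme is resolved to a PCI address, filtered through pci_can_use, parsed with nvme_get, and recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps seen at @60-@63. A sketch of that shape follows, assuming PCIe-attached controllers; pci_can_use is reduced here to a placeholder block-list test, since the real scripts/common.sh check is more involved.]

    #!/usr/bin/env bash
    # Sketch of the controller discovery loop visible in the trace
    # (reconstructed, not copied from SPDK's nvme/functions.sh).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {
        # Placeholder policy: accept unless PCI_BLOCKED lists the address.
        [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # The device symlink resolves into the PCI tree,
        # e.g. /sys/devices/pci0000:00/0000:00:08.0 -> bdf 0000:00:08.0
        pci=$(basename "$(readlink -f "$ctrl/device")")
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                       # e.g. nvme1
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns          # name of its namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

    for dev in "${!bdfs[@]}"; do
        echo "$dev -> ${bdfs[$dev]}"               # e.g. nvme1 -> 0000:00:08.0
    done

[Indexing ordered_ctrls by the numeric suffix keeps lookups in device-number order even where the glob's lexical order would differ, e.g. nvme10 sorts before nvme2.]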
-- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.659 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.659 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.659 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:03.660 14:57:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.660 14:57:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:03.660 14:57:26 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:03.660 14:57:26 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@18 -- # shift 00:11:03.660 14:57:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.660 14:57:26 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.660 14:57:26 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.660 14:57:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:03.660 14:57:26 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:03.660 14:57:26 -- 
nvme/functions.sh@21 -- # IFS=: [remaining nvme1n2 per-field xtrace condensed below]
00:11:03.660 14:57:26 -- # nvme1n2: nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:03.660 14:57:26 -- # nvme1n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:03.660 14:57:26 -- # nvme1n2: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:11:03.660 14:57:26 -- # nvme1n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:03.660 14:57:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
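The trace above is bash xtrace of nvme/functions.sh's nvme_get: each `IFS=:` / `read -r reg val` / `eval` triplet splits one line of `nvme id-ns` output at the first colon and stores it in a global associative array named after the namespace (here nvme1n2). Because `read` assigns everything after the first colon to `val`, the lbaf values keep their internal colons intact. A minimal sketch of that pattern, assuming nvme-cli's "field : value" output format; parse_id_ns and the usage line are hypothetical illustrations, not the verbatim nvme/functions.sh code:

#!/usr/bin/env bash
# Sketch of the parsing loop visible in the xtrace above (assumption: nvme-cli
# prints one "field : value" pair per line). Not the actual SPDK helper.
shopt -s extglob

parse_id_ns() {
  local ref=$1 dev=$2 reg val
  local -gA "$ref=()"                 # global associative array, e.g. nvme1n2=(), as in the trace
  while IFS=: read -r reg val; do
    reg=${reg%%*( )}                  # trim padding after the field name
    val=${val##*( )}                  # trim padding before the value (trailing spaces survive, cf. sn='12340 ')
    [[ -n $reg && -n $val ]] || continue
    eval "${ref}[\$reg]=\$val"        # e.g. nvme1n2[mssrl]=128
  done < <(nvme id-ns "$dev")
}

# usage (hypothetical): parse_id_ns nvme1n2 /dev/nvme1n2; echo "${nvme1n2[mssrl]}"   # -> 128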
00:11:03.661 14:57:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:03.661 14:57:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:11:03.661 14:57:26 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:11:03.661 14:57:26 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:11:03.661 14:57:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 [per-field xtrace condensed below]
00:11:03.661 14:57:26 -- # nvme1n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:11:03.661 14:57:26 -- # nvme1n3: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:11:03.661 14:57:27 -- # nvme1n3: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:11:03.661 14:57:27 -- # nvme1n3: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:03.662 14:57:27 -- # nvme1n3: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:11:03.662 14:57:27 -- # nvme1n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:03.662 14:57:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:11:03.662 14:57:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:03.662 14:57:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:03.662 14:57:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0
00:11:03.662 14:57:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:11:03.662 14:57:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:03.662 14:57:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:03.662 14:57:27 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:11:03.662 14:57:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:11:03.662 14:57:27 -- scripts/common.sh@24 -- # return 0
00:11:03.662 14:57:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:03.662 14:57:27 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:03.662 14:57:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 [per-field xtrace condensed below]
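At this point the trace has finished nvme1 (its namespace map, its BDF 0000:00:08.0, and its slot in ordered_ctrls) and moves to the next controller, gating on pci_can_use before parsing it. A condensed sketch of that outer loop follows; the array names, the `for ns` glob, and pci_can_use all appear in the trace, while the readlink-based BDF lookup and the reuse of parse_id_ns above are illustrative assumptions:

# Sketch of the controller-enumeration step the xtrace walks through.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:06.0 (assumed lookup)
  pci_can_use "$pci" || continue                    # skip blocked devices (scripts/common.sh)
  ctrl_dev=${ctrl##*/}                              # e.g. nvme2
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the per-controller namespace map
  bdfs[$ctrl_dev]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
  for ns in "$ctrl/${ctrl##*/}n"*; do               # namespace nodes, e.g. .../nvme2/nvme2n1
    [[ -e $ns ]] && parse_id_ns "${ns##*/}" "/dev/${ns##*/}"
  done
done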
00:11:03.662 14:57:27 -- # nvme2: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:11:03.662 14:57:27 -- # nvme2: cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:11:03.662 14:57:27 -- # nvme2: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:11:03.662 14:57:27 -- # nvme2: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:11:03.663 14:57:27 -- # nvme2: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:11:03.663 14:57:27 -- # nvme2: mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:11:03.663 14:57:27 -- # nvme2: anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:11:03.663 14:57:27 -- # nvme2: oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3
00:11:03.663 14:57:27 -- # nvme2: sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
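With nvme2 populated, later test code can consult these fields directly. For example, oncs=0x15d is a bitmask of optional NVM commands; in the NVMe spec's ONCS layout, bit 3 is Write Zeroes, and 0x15d has that bit set. A small hypothetical consumer (the helper name is an illustration, not an SPDK function):

# Sketch: reading a feature bit out of a populated id-ctrl array.
supports_write_zeroes() {
  local -n c=$1                  # nameref to a populated ctrl array (bash 4.3+)
  (( c[oncs] & (1 << 3) ))       # ONCS bit 3 = Write Zeroes support
}

if supports_write_zeroes nvme2; then   # 0x15d & 0x8 -> nonzero, so true here
  echo "nvme2 supports Write Zeroes"
fi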
00:11:03.663 14:57:27 -- # nvme2 (continued): ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:03.664 14:57:27 -- # nvme2 (continued): rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:11:03.664 14:57:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:03.664 14:57:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:03.664 14:57:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:03.664 14:57:27 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:03.664 14:57:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:03.664 14:57:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 [per-field xtrace condensed below]
00:11:03.664 14:57:27 -- # nvme2n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:11:03.664 14:57:27 -- # nvme2n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:11:03.664 14:57:27 -- # nvme2n1: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:11:03.664 14:57:27 -- # nvme2n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000
00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:03.664 14:57:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.664 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.664 14:57:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:03.664 14:57:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:03.665 14:57:27 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:03.665 14:57:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:03.665 14:57:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:03.665 14:57:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.665 14:57:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:03.665 14:57:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:03.665 14:57:27 -- scripts/common.sh@15 -- # local i 00:11:03.665 14:57:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:03.665 14:57:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.665 14:57:27 -- scripts/common.sh@24 -- # return 0 00:11:03.665 14:57:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:03.665 14:57:27 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:03.665 14:57:27 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@18 -- # shift 00:11:03.665 14:57:27 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.665 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:03.665 14:57:27 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.665 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:03.666 14:57:27 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
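The value just captured for nvme3, oncs=0x15d, is the Optional NVM Command Support field; bit 8 is the Copy capability that ctrl_has_scc tests further down this log. A quick decode of that bitfield (bit positions per the NVMe base specification, shown only to make the later "oncs & 1 << 8" check legible):

oncs=0x15d
(( oncs & 1 << 0 )) && echo "Compare supported"            # bit 0
(( oncs & 1 << 2 )) && echo "Dataset Management supported" # bit 2
(( oncs & 1 << 8 )) && echo "Copy (SCC) supported"         # bit 8 - what nvme_scc.sh needs

Bash arithmetic follows C precedence, so "oncs & 1 << 8" evaluates as "oncs & (1 << 8)", matching the expression the trace executes.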
00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:03.666 14:57:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.666 14:57:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:03.666 14:57:27 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:03.666 14:57:27 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@18 -- # shift 00:11:03.666 14:57:27 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.666 
14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.666 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:03.666 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:03.666 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.667 14:57:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:03.667 14:57:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.667 14:57:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:03.667 14:57:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:03.667 14:57:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:03.667 14:57:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:03.667 14:57:27 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:03.667 14:57:27 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:03.667 14:57:27 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:03.667 14:57:27 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:03.667 14:57:27 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:03.667 14:57:27 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:03.667 14:57:27 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:03.667 14:57:27 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.667 14:57:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:03.667 14:57:27 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:03.667 14:57:27 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:03.667 14:57:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:03.667 14:57:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:03.667 14:57:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:03.667 14:57:27 -- nvme/functions.sh@197 -- # echo nvme1 00:11:03.667 14:57:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.667 14:57:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:03.667 14:57:27 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:03.667 
14:57:27 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:03.667 14:57:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:03.667 14:57:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:03.667 14:57:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:03.667 14:57:27 -- nvme/functions.sh@197 -- # echo nvme0 00:11:03.667 14:57:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.667 14:57:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:03.667 14:57:27 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:03.667 14:57:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:03.667 14:57:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:03.667 14:57:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:03.668 14:57:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:03.668 14:57:27 -- nvme/functions.sh@197 -- # echo nvme3 00:11:03.668 14:57:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:03.668 14:57:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:03.668 14:57:27 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:03.668 14:57:27 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:03.668 14:57:27 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:03.668 14:57:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:03.668 14:57:27 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:03.668 14:57:27 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:03.668 14:57:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:03.668 14:57:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:03.668 14:57:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:03.668 14:57:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:03.668 14:57:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:03.668 14:57:27 -- nvme/functions.sh@197 -- # echo nvme2 00:11:03.668 14:57:27 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:03.668 14:57:27 -- nvme/functions.sh@206 -- # echo nvme1 00:11:03.668 14:57:27 -- nvme/functions.sh@207 -- # return 0 00:11:03.668 14:57:27 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:03.668 14:57:27 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:03.668 14:57:27 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:04.609 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.609 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.609 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.609 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:04.869 14:57:28 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:04.869 14:57:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:04.869 14:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.869 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:04.869 ************************************ 00:11:04.869 START TEST nvme_simple_copy 00:11:04.869 ************************************ 00:11:04.869 14:57:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:04.869 Initializing NVMe Controllers 00:11:04.869 Attaching to 0000:00:08.0 00:11:04.869 Controller supports SCC. Attached to 0000:00:08.0 00:11:04.869 Namespace ID: 1 size: 4GB 00:11:04.869 Initialization complete. 00:11:04.869 00:11:04.869 Controller QEMU NVMe Ctrl (12342 ) 00:11:04.869 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:04.869 Namespace Block Size:4096 00:11:04.869 Writing LBAs 0 to 63 with Random Data 00:11:04.869 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:04.869 LBAs matching Written Data: 64 00:11:04.869 00:11:04.869 real 0m0.234s 00:11:04.869 user 0m0.073s 00:11:04.869 sys 0m0.060s 00:11:04.869 14:57:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.869 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:04.869 ************************************ 00:11:04.869 END TEST nvme_simple_copy 00:11:04.869 ************************************ 00:11:05.131 00:11:05.131 real 0m7.569s 00:11:05.131 user 0m1.017s 00:11:05.131 sys 0m1.433s 00:11:05.131 14:57:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.131 ************************************ 00:11:05.131 END TEST nvme_scc 00:11:05.131 ************************************ 00:11:05.131 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:05.131 14:57:28 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:05.131 14:57:28 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:05.131 14:57:28 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:05.131 14:57:28 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:05.131 14:57:28 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:05.131 14:57:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.131 14:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.131 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:11:05.131 ************************************ 00:11:05.131 START TEST nvme_fdp 00:11:05.131 ************************************ 00:11:05.131 14:57:28 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:05.131 * Looking for test storage... 
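[editor's note] The simple_copy test above writes LBAs 0-63 with random data, issues a single Simple Copy command targeting destination LBA 256, and verifies all 64 LBAs match. A rough nvme-cli equivalent of that flow, heavily hedged: the copy subcommand and its flag spellings vary across nvme-cli versions, and the device name is assumed.

```bash
# Rough equivalent of the simple_copy flow above (nvme-cli's `copy`
# subcommand flags vary by version; /dev/nvme1n1 is assumed).
# Block size is 4096, as the test banner reports.
dev=/dev/nvme1n1
dd if=/dev/urandom of="$dev" bs=4096 count=64 oflag=direct       # fill LBAs 0-63
# --blocks is 0-based NLB, so 63 copies 64 logical blocks
nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63
dd if="$dev" of=/tmp/src.bin bs=4096 skip=0   count=64 iflag=direct
dd if="$dev" of=/tmp/dst.bin bs=4096 skip=256 count=64 iflag=direct
cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"
```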
00:11:05.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.131 14:57:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.131 14:57:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.131 14:57:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.131 14:57:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.131 14:57:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.131 14:57:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.131 14:57:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.131 14:57:28 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.131 14:57:28 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.131 14:57:28 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.131 14:57:28 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.131 14:57:28 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.131 14:57:28 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.131 14:57:28 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.131 14:57:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.131 14:57:28 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.131 14:57:28 -- scripts/common.sh@344 -- # : 1 00:11:05.131 14:57:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.131 14:57:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.131 14:57:28 -- scripts/common.sh@364 -- # decimal 1 00:11:05.131 14:57:28 -- scripts/common.sh@352 -- # local d=1 00:11:05.131 14:57:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.131 14:57:28 -- scripts/common.sh@354 -- # echo 1 00:11:05.131 14:57:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.131 14:57:28 -- scripts/common.sh@365 -- # decimal 2 00:11:05.131 14:57:28 -- scripts/common.sh@352 -- # local d=2 00:11:05.131 14:57:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.131 14:57:28 -- scripts/common.sh@354 -- # echo 2 00:11:05.131 14:57:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.131 14:57:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.131 14:57:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.131 14:57:28 -- scripts/common.sh@367 -- # return 0 00:11:05.131 14:57:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.131 14:57:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.131 --rc genhtml_branch_coverage=1 00:11:05.131 --rc genhtml_function_coverage=1 00:11:05.131 --rc genhtml_legend=1 00:11:05.131 --rc geninfo_all_blocks=1 00:11:05.131 --rc geninfo_unexecuted_blocks=1 00:11:05.131 00:11:05.131 ' 00:11:05.131 14:57:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.131 --rc genhtml_branch_coverage=1 00:11:05.131 --rc genhtml_function_coverage=1 00:11:05.131 --rc genhtml_legend=1 00:11:05.131 --rc geninfo_all_blocks=1 00:11:05.131 --rc geninfo_unexecuted_blocks=1 00:11:05.131 00:11:05.131 ' 00:11:05.131 14:57:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.131 --rc genhtml_branch_coverage=1 00:11:05.131 --rc genhtml_function_coverage=1 00:11:05.131 --rc genhtml_legend=1 00:11:05.131 --rc geninfo_all_blocks=1 00:11:05.131 --rc geninfo_unexecuted_blocks=1 00:11:05.131 00:11:05.131 ' 00:11:05.131 14:57:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.131 --rc genhtml_branch_coverage=1 00:11:05.131 --rc genhtml_function_coverage=1 00:11:05.131 --rc genhtml_legend=1 00:11:05.131 --rc geninfo_all_blocks=1 00:11:05.131 --rc geninfo_unexecuted_blocks=1 00:11:05.131 00:11:05.131 ' 00:11:05.131 14:57:28 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.131 14:57:28 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.131 14:57:28 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:05.131 14:57:28 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:05.131 14:57:28 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.131 14:57:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.131 14:57:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.131 14:57:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.131 14:57:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.131 14:57:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.131 14:57:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.131 14:57:28 -- paths/export.sh@5 -- # export PATH 00:11:05.131 14:57:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.131 14:57:28 -- nvme/functions.sh@10 -- # ctrls=() 00:11:05.131 14:57:28 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:05.131 14:57:28 -- nvme/functions.sh@11 -- # nvmes=() 00:11:05.131 14:57:28 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:05.131 14:57:28 -- nvme/functions.sh@12 -- # bdfs=() 00:11:05.131 14:57:28 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:05.131 14:57:28 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:05.131 14:57:28 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:05.131 
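[editor's note] The cmp_versions trace above is how the harness decides that the installed lcov (1.15) predates 2.x and therefore needs the legacy --rc option spellings: split both version strings into fields, compare field by field, and treat a missing field as zero. A condensed sketch of the same idiom (simplified from scripts/common.sh, which also splits on "-" and ":"):

```bash
# Condensed version-compare sketch mirroring the trace above; keeps only
# the dot-separated case from scripts/common.sh's cmp_versions.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy lcov options"
```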
14:57:28 -- nvme/functions.sh@14 -- # nvme_name= 00:11:05.132 14:57:28 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.132 14:57:28 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:05.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.704 Waiting for block devices as requested 00:11:05.704 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.704 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.965 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.965 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.261 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:11.261 14:57:34 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:11.261 14:57:34 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:11.261 14:57:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.261 14:57:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:11.261 14:57:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:11.261 14:57:34 -- scripts/common.sh@15 -- # local i 00:11:11.261 14:57:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:11.261 14:57:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.261 14:57:34 -- scripts/common.sh@24 -- # return 0 00:11:11.261 14:57:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:11.261 14:57:34 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:11.261 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.261 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:11.261 14:57:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.261 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:11.261 14:57:34 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.261 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 
14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:11.262 
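[editor's note] Everything from scan_nvme_ctrls onward is one pattern repeated per controller: nvme_get pipes `nvme id-ctrl /dev/nvmeX` through `IFS=: read -r reg val` and evals each pair into a per-controller associative array, which is what produces this long register dump. A self-contained sketch of that parse (simplified; the real helper also walks namespaces and uses the nvme-cli build at /usr/local/src/nvme-cli):

```bash
# Minimal sketch of the nvme_get parse traced above: turn "name : value"
# lines from `nvme id-ctrl` into a bash associative array. The harness's
# version additionally handles per-namespace ids and padded values.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # field names arrive padded, e.g. "oncs    "
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=${val# }          # val keeps any later colons (power-state lines)
done < <(nvme id-ctrl /dev/nvme0)

echo "oncs=${ctrl[oncs]} mdts=${ctrl[mdts]} nn=${ctrl[nn]}"
```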
14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:11.262 14:57:34 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:11.262 14:57:34 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.262 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.262 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 
14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 
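[editor's note] A few of the fields captured for nvme0 are worth decoding: mdts=7 caps transfers at 2^7 minimum-size memory pages, sqes=0x66 and cqes=0x44 encode queue entry sizes as powers of two (low nibble required, high nibble maximum), and oncs=0x15d is the command-support bitmap probed earlier. A sketch of the arithmetic, assuming CAP.MPSMIN is 4 KiB (the usual QEMU value; the real figure comes from the controller's CAP register):

```bash
# Decoding a few fields from the dump above. MPSMIN is assumed to be
# 4 KiB here; it is actually read from the CAP register.
mdts=7; mpsmin=4096
echo "max data transfer: $(( (1 << mdts) * mpsmin / 1024 )) KiB"   # 512 KiB

sqes=0x66; cqes=0x44   # low nibble = required size, high nibble = maximum
echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"                    # 64
echo "CQ entry: $(( 1 << (cqes & 0xf) )) bytes"                    # 16
```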
00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:11.263 14:57:34 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.263 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.263 14:57:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:11.264 14:57:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
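[editor's note] With the dump finished, the harness records the controller in its bookkeeping maps: ctrls here, then (just below) nvmes with the namespace array, bdfs with the PCI address 0000:00:09.0, and ordered_ctrls. The PCI address comes from sysfs; a standalone sketch of that mapping for PCIe-attached controllers:

```bash
# Sketch of the bdfs bookkeeping: resolve each controller's PCI address
# from its sysfs device link (PCIe-attached controllers only).
declare -A bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    name=${ctrl##*/}
    bdfs[$name]=$(basename "$(readlink -f "$ctrl/device")")
done

for name in "${!bdfs[@]}"; do
    echo "$name -> ${bdfs[$name]}"    # e.g. nvme0 -> 0000:00:09.0
done
```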
00:11:11.264 14:57:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:11.264 14:57:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:11.264 14:57:34 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:11.264 14:57:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.264 14:57:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:11.264 14:57:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:11.264 14:57:34 -- scripts/common.sh@15 -- # local i 00:11:11.264 14:57:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:11.264 14:57:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.264 14:57:34 -- scripts/common.sh@24 -- # return 0 00:11:11.264 14:57:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:11.264 14:57:34 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:11.264 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.264 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:11.264 
14:57:34 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 
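[editor's note] Note the field where nvme1 diverges from nvme0: ctratt is 0x8000 here versus 0x88010 above. Bit 19 of CTRATT advertises Flexible Data Placement, which is why the nvme_fdp test that just started will settle on nvme0 (subnqn nqn.2019-08.org.qemu:fdp-subsys3) rather than this controller. A sketch of the bit test using the two values from this log:

```bash
# CTRATT bit 19 = Flexible Data Placement support; the ctratt values
# below are taken from this log's id-ctrl dumps.
for pair in nvme0:0x88010 nvme1:0x8000; do
    name=${pair%%:*} ctratt=${pair#*:}
    if (( ctratt & 1 << 19 )); then
        echo "$name advertises FDP"
    else
        echo "$name: no FDP (ctratt=$ctratt)"
    fi
done
```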
00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.264 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.264 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:11.264 14:57:34 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 
00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:11.265 14:57:34 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.265 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.265 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:11.266 14:57:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:11.266 14:57:34 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:11.266 14:57:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.266 14:57:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:11.266 14:57:34 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:11.266 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.266 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.266 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:11.266 14:57:34 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.266 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 
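nsze, ncap and nuse for nvme1n1 all read 0x100000 LBAs; combined with the in-use LBA format parsed a little further down (lbaf4, lbads:12, i.e. 4096-byte logical blocks), that works out to a 4 GiB namespace:

    echo $(( 0x100000 * (1 << 12) ))   # 4294967296 bytes = 4 GiB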
00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:11.267 14:57:34 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.267 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:11.267 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.267 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:11.268 14:57:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.268 14:57:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:11.268 14:57:34 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:11.268 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.268 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 
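Each lbafN string captured above encodes one LBA format: ms is metadata bytes per LBA, lbads is log2 of the data size, and rp is relative performance; flbas bits 3:0 select the active format (0x4 here, matching the lbaf4 entry tagged "(in use)"). A small extraction from the array built above, assuming the names as parsed:

    fmt=$(( ${nvme1n1[flbas]} & 0xf ))        # 4
    lbaf=${nvme1n1[lbaf$fmt]}                 # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *} # "12"
    echo $(( 1 << lbads ))                    # 4096-byte logical blocks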
00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:11.268 14:57:34 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.268 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:11.268 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:11.268 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 
14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:11.269 14:57:34 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:11.269 14:57:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.269 14:57:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
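With nvme1n2 recorded, the per-namespace pass now repeats for nvme1n3. The loop shape driving this, per the functions.sh@54-58 trace lines (a sketch from the trace, not the verbatim source):

    for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: /sys/class/nvme/nvme1/nvme1n*
        [[ -e $ns ]] || continue                  # @55
        ns_dev=${ns##*/}                          # @56: nvme1n1, nvme1n2, nvme1n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: one assoc array per namespace
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by namespace number
    done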
00:11:11.269 14:57:34 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:11.269 14:57:34 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:11.269 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.269 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.269 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:11.269 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:11.269 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.270 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.270 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.270 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.271 14:57:34 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:11.271 14:57:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:11.271 14:57:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:11.271 14:57:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:11.271 14:57:34 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:11.271 14:57:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.271 14:57:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:11.271 14:57:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:11.271 14:57:34 -- scripts/common.sh@15 -- # local i 00:11:11.271 14:57:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:11.271 14:57:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.271 14:57:34 -- scripts/common.sh@24 -- # return 0 00:11:11.271 14:57:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:11.271 14:57:34 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:11.271 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.271 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
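The trace above is the capture half of nvme_get (nvme/functions.sh@16-23): the id-ctrl output of /usr/local/src/nvme-cli/nvme is read line by line with IFS=: into a reg/val pair, empty values are skipped at @22, and @23 evals each pair into a global associative array named after the controller. A minimal sketch of that loop, with the array name hard-coded instead of passed in as the real function's ref argument (the whitespace trimming here is illustrative, not the script's exact handling):

# sketch of the nvme_get read/eval loop, assuming `nvme id-ctrl` emits "reg : val" lines
declare -gA nvme2=()                      # one associative array per controller, as at @20
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue             # @22: ignore lines with no value
    reg=${reg//[[:space:]]/}              # trim padding around the register name
    eval "nvme2[$reg]=\"${val# }\""       # @23: e.g. nvme2[vid]=0x1b36
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)

The lbaf0-lbaf7 strings captured for nvme1n3 just before this decode as: ms = metadata bytes per block, lbads = log2 of the LBA data size (so 9 means 512 B and 12 means 4096 B), rp = relative performance; the format actually selected is the one reported as "(in use)".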
00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.271 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.271 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.271 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 
14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:11.272 14:57:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:11.272 14:57:34 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.272 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:11.272 14:57:34 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.272 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 
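At this point the id-ctrl pass has reached nvme2's capability masks: oacs=0x12a, frmw=0x3, oncs=0x15d, vwc=0x7, sqes=0x66/cqes=0x44. These are bitfields from the Identify Controller data structure; oncs, for instance, enumerates the optional NVM commands. A hypothetical one-off decoder (not part of functions.sh), using the NVMe base-spec bit order:

# decode the ONCS mask captured above (oncs=0x15d)
oncs=0x15d
bits=(compare write_uncorrectable dsm write_zeroes save_select reservations timestamp verify copy)
for i in "${!bits[@]}"; do
    (( oncs >> i & 1 )) && printf 'ONCS bit %d: %s\n' "$i" "${bits[i]}"
done

For 0x15d this prints Compare, Dataset Management, Write Zeroes, Save/Select, Timestamp and Copy, which lines up with the copy-related namespace fields (mssrl/mcl/msrc) recorded for the namespaces above.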
00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.273 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.273 14:57:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.273 14:57:34 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.273 14:57:34 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:11.274 14:57:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.274 14:57:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:11.274 14:57:34 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:11.274 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.274 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 
14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:11.274 14:57:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:11.274 
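While the nvme2n1 id-ns pass scrolls by, note that the size fields captured at the top of it (nsze = ncap = nuse = 0x17a17a) count logical blocks, not bytes. With flbas=0x7 selecting the 4 KiB format that the lbaf dump below marks "(in use)" (lbaf7, lbads:12), the namespace size works out as:

# hypothetical back-of-the-envelope check, not part of the traced script
printf '%d blocks x 4096 B = %d bytes\n' $((0x17a17a)) $((0x17a17a * 4096))
# -> 1548666 blocks x 4096 B = 6343335936 bytes (~5.9 GiB)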
14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:11.274 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.274 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.274 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:11.275 14:57:34 
-- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:11.275 14:57:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:11.275 14:57:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:11.275 14:57:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:11.275 14:57:34 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:11.275 14:57:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.275 14:57:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:11.275 14:57:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:11.275 14:57:34 -- scripts/common.sh@15 -- # local i 00:11:11.275 14:57:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:11.275 14:57:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.275 14:57:34 -- scripts/common.sh@24 -- # return 0 00:11:11.275 14:57:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:11.275 14:57:34 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:11.275 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.275 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.275 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.275 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:11.275 14:57:34 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 
14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.276 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.276 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.276 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:11.277 
14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.277 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:11.277 14:57:34 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:11.277 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:11.278 14:57:34 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.278 14:57:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:11.278 14:57:34 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:11.278 14:57:34 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@18 -- # shift 00:11:11.278 14:57:34 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:11.278 
14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.278 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:11.278 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.278 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 
14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:11.279 14:57:34 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:11.279 14:57:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.279 14:57:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:11.279 14:57:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:11.279 14:57:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:11.279 14:57:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:11.279 14:57:34 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:11.279 14:57:34 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:11.279 14:57:34 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:11.279 14:57:34 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:11.279 14:57:34 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:11.279 14:57:34 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:11.279 14:57:34 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:11.279 14:57:34 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:11.279 14:57:34 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:11.279 14:57:34 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:11.279 14:57:34 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:11.279 14:57:34 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:11.279 14:57:34 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:11.279 14:57:34 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:11.279 14:57:34 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:11.279 14:57:34 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:11.279 14:57:34 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:11.279 14:57:34 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:11.279 14:57:34 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:11.279 14:57:34 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:11.279 14:57:34 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:11.279 14:57:34 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:11.279 14:57:34 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:11.279 14:57:34 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:11.280 14:57:34 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:11.280 14:57:34 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:11.280 14:57:34 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:11.280 14:57:34 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:11.280 14:57:34 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:11.280 14:57:34 -- nvme/functions.sh@197 -- # echo nvme0 00:11:11.280 14:57:34 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:11.280 14:57:34 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:11.280 14:57:34 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:11.280 14:57:34 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:11.280 14:57:34 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:11.280 14:57:34 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:11.280 14:57:34 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:11.280 14:57:34 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:11.280 14:57:34 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:11.280 14:57:34 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:11.280 14:57:34 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:11.280 14:57:34 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:11.280 14:57:34 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:11.280 14:57:34 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:11.280 14:57:34 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:11.280 14:57:34 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:11.280 14:57:34 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:11.280 14:57:34 -- nvme/functions.sh@204 -- # trap - ERR 00:11:11.280 14:57:34 -- nvme/functions.sh@204 -- # print_backtrace 00:11:11.280 14:57:34 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:11.280 14:57:34 -- common/autotest_common.sh@1142 -- # return 0 00:11:11.280 14:57:34 -- nvme/functions.sh@204 -- # trap - ERR 00:11:11.280 14:57:34 -- nvme/functions.sh@204 -- # print_backtrace 00:11:11.280 14:57:34 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:11.280 14:57:34 -- common/autotest_common.sh@1142 -- # return 0 00:11:11.280 14:57:34 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:11.280 14:57:34 -- nvme/functions.sh@206 -- # echo nvme0 00:11:11.280 14:57:34 -- nvme/functions.sh@207 -- # return 0 00:11:11.280 14:57:34 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:11.280 14:57:34 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:11.280 14:57:34 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:12.223 0000:00:06.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:12.223 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:12.223 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:12.223 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:12.223 14:57:35 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:12.223 14:57:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:12.223 14:57:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.223 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:11:12.223 ************************************ 00:11:12.223 START TEST nvme_flexible_data_placement 00:11:12.223 ************************************ 00:11:12.223 14:57:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:12.484 Initializing NVMe Controllers 00:11:12.484 Attaching to 0000:00:09.0 00:11:12.484 Controller supports FDP Attached to 0000:00:09.0 00:11:12.484 Namespace ID: 1 Endurance Group ID: 1 00:11:12.484 Initialization complete. 00:11:12.484 00:11:12.484 ================================== 00:11:12.484 == FDP tests for Namespace: #01 == 00:11:12.484 ================================== 00:11:12.484 00:11:12.484 Get Feature: FDP: 00:11:12.484 ================= 00:11:12.484 Enabled: Yes 00:11:12.484 FDP configuration Index: 0 00:11:12.484 00:11:12.484 FDP configurations log page 00:11:12.484 =========================== 00:11:12.484 Number of FDP configurations: 1 00:11:12.484 Version: 0 00:11:12.484 Size: 112 00:11:12.484 FDP Configuration Descriptor: 0 00:11:12.484 Descriptor Size: 96 00:11:12.484 Reclaim Group Identifier format: 2 00:11:12.484 FDP Volatile Write Cache: Not Present 00:11:12.484 FDP Configuration: Valid 00:11:12.484 Vendor Specific Size: 0 00:11:12.484 Number of Reclaim Groups: 2 00:11:12.484 Number of Reclaim Unit Handles: 8 00:11:12.484 Max Placement Identifiers: 128 00:11:12.484 Number of Namespaces Supported: 256 00:11:12.484 Reclaim unit Nominal Size: 6000000 bytes 00:11:12.484 Estimated Reclaim Unit Time Limit: Not Reported 00:11:12.484 RUH Desc #000: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #001: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #002: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #003: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #004: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #005: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #006: RUH Type: Initially Isolated 00:11:12.484 RUH Desc #007: RUH Type: Initially Isolated 00:11:12.484 00:11:12.484 FDP reclaim unit handle usage log page 00:11:12.484 ====================================== 00:11:12.484 Number of Reclaim Unit Handles: 8 00:11:12.484 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:12.484 RUH Usage Desc #001: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #002: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #003: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #004: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #005: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #006: RUH Attributes: Unused 00:11:12.484 RUH Usage Desc #007: RUH Attributes: Unused 00:11:12.484 00:11:12.484 FDP statistics log page 00:11:12.484 ======================= 00:11:12.484 Host bytes with metadata written: 1945473024 00:11:12.484 Media bytes with metadata written: 1945767936 00:11:12.484 Media bytes erased: 0 00:11:12.484 00:11:12.484 FDP Reclaim unit handle status 
00:11:12.484 ============================== 00:11:12.484 Number of RUHS descriptors: 2 00:11:12.484 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000040a7 00:11:12.484 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:12.484 00:11:12.484 FDP write on placement id: 0 success 00:11:12.484 00:11:12.484 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:12.484 00:11:12.484 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:12.484 00:11:12.484 Get Feature: FDP Events for Placement handle: #0 00:11:12.484 ======================== 00:11:12.484 Number of FDP Events: 6 00:11:12.484 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:12.484 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:12.484 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:12.484 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:12.484 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:12.484 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:12.484 00:11:12.484 FDP events log page 00:11:12.484 =================== 00:11:12.484 Number of FDP events: 1 00:11:12.484 FDP Event #0: 00:11:12.484 Event Type: RU Not Written to Capacity 00:11:12.484 Placement Identifier: Valid 00:11:12.484 NSID: Valid 00:11:12.484 Location: Valid 00:11:12.484 Placement Identifier: 0 00:11:12.484 Event Timestamp: 3 00:11:12.484 Namespace Identifier: 1 00:11:12.484 Reclaim Group Identifier: 0 00:11:12.484 Reclaim Unit Handle Identifier: 0 00:11:12.484 00:11:12.484 FDP test passed 00:11:12.484 00:11:12.484 real 0m0.212s 00:11:12.485 user 0m0.060s 00:11:12.485 sys 0m0.050s 00:11:12.485 14:57:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:12.485 ************************************ 00:11:12.485 END TEST nvme_flexible_data_placement 00:11:12.485 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:11:12.485 ************************************ 00:11:12.485 00:11:12.485 real 0m7.493s 00:11:12.485 user 0m0.999s 00:11:12.485 sys 0m1.413s 00:11:12.485 14:57:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:12.485 14:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:12.485 ************************************ 00:11:12.485 END TEST nvme_fdp 00:11:12.485 ************************************ 00:11:12.485 14:57:36 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:12.485 14:57:36 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:12.485 14:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.485 14:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.485 14:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:12.746 ************************************ 00:11:12.746 START TEST nvme_rpc 00:11:12.746 ************************************ 00:11:12.746 14:57:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:12.746 * Looking for test storage... 
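An aside on the machinery just traced: the register-by-register xtrace earlier in this excerpt is nvme/functions.sh caching every `nvme id-ctrl`/`id-ns` field into bash associative arrays (nvme3[...], nvme3n1[...]), and get_ctrls_with_feature then picks the one controller whose CTRATT advertises Flexible Data Placement. A minimal sketch of both steps, assuming the same nvme-cli path and eliding the namespace walk and value quoting the real helper does:

declare -A ctrl

nvme_get() {
    local dev=$1 reg val
    # Split each "reg : val" line on ':' -- the same IFS=:/read -r loop
    # seen in the trace -- and cache the pair in the array.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        val=${val# }
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$dev")
}

nvme_get nvme0
# CTRATT bit 19 is the FDP capability bit: nvme0's 0x88010 has it set,
# the 0x8000-only controllers do not, which is why get_ctrls_with_feature
# echoed only nvme0 above and the fdp test bound to it.
(( ctrl[ctratt] & 1 << 19 )) && echo "nvme0 supports FDP"
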
00:11:12.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:12.746 14:57:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:12.746 14:57:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:12.746 14:57:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:12.746 14:57:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:12.746 14:57:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:12.746 14:57:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:12.746 14:57:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:12.746 14:57:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:12.746 14:57:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:12.746 14:57:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.746 14:57:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:12.746 14:57:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:12.746 14:57:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:12.746 14:57:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:12.746 14:57:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:12.746 14:57:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:12.746 14:57:36 -- scripts/common.sh@344 -- # : 1 00:11:12.746 14:57:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:12.746 14:57:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.746 14:57:36 -- scripts/common.sh@364 -- # decimal 1 00:11:12.746 14:57:36 -- scripts/common.sh@352 -- # local d=1 00:11:12.746 14:57:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.746 14:57:36 -- scripts/common.sh@354 -- # echo 1 00:11:12.746 14:57:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:12.746 14:57:36 -- scripts/common.sh@365 -- # decimal 2 00:11:12.746 14:57:36 -- scripts/common.sh@352 -- # local d=2 00:11:12.746 14:57:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.746 14:57:36 -- scripts/common.sh@354 -- # echo 2 00:11:12.746 14:57:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:12.746 14:57:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:12.746 14:57:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:12.746 14:57:36 -- scripts/common.sh@367 -- # return 0 00:11:12.746 14:57:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.746 14:57:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:12.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.746 --rc genhtml_branch_coverage=1 00:11:12.746 --rc genhtml_function_coverage=1 00:11:12.746 --rc genhtml_legend=1 00:11:12.746 --rc geninfo_all_blocks=1 00:11:12.746 --rc geninfo_unexecuted_blocks=1 00:11:12.746 00:11:12.746 ' 00:11:12.746 14:57:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:12.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.746 --rc genhtml_branch_coverage=1 00:11:12.746 --rc genhtml_function_coverage=1 00:11:12.746 --rc genhtml_legend=1 00:11:12.746 --rc geninfo_all_blocks=1 00:11:12.746 --rc geninfo_unexecuted_blocks=1 00:11:12.746 00:11:12.746 ' 00:11:12.746 14:57:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:12.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.746 --rc genhtml_branch_coverage=1 00:11:12.746 --rc genhtml_function_coverage=1 00:11:12.746 --rc genhtml_legend=1 00:11:12.746 --rc geninfo_all_blocks=1 00:11:12.746 --rc geninfo_unexecuted_blocks=1 00:11:12.746 00:11:12.746 ' 00:11:12.746 14:57:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:12.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.746 --rc genhtml_branch_coverage=1 00:11:12.746 --rc genhtml_function_coverage=1 00:11:12.746 --rc genhtml_legend=1 00:11:12.746 --rc geninfo_all_blocks=1 00:11:12.746 --rc geninfo_unexecuted_blocks=1 00:11:12.746 00:11:12.746 ' 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:12.746 14:57:36 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:12.746 14:57:36 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:12.746 14:57:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:12.746 14:57:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:12.746 14:57:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:12.746 14:57:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:12.746 14:57:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:12.746 14:57:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:12.746 14:57:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:12.746 14:57:36 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:12.746 14:57:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:12.746 14:57:36 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78161 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:12.746 14:57:36 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78161 00:11:12.746 14:57:36 -- common/autotest_common.sh@829 -- # '[' -z 78161 ']' 00:11:12.746 14:57:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.746 14:57:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.746 14:57:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.746 14:57:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.746 14:57:36 -- common/autotest_common.sh@10 -- # set +x 00:11:13.007 [2024-11-18 14:57:36.365655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
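The get_first_nvme_bdf trace above reduces to "ask gen_nvme.sh for the bdev config and take the first PCI address". A condensed sketch of that helper, with rootdir matching the paths in this run (the real version goes through get_nvme_bdfs plus a few extra guards):

rootdir=/home/vagrant/spdk_repo/spdk

get_first_nvme_bdf() {
    local bdfs
    # gen_nvme.sh emits JSON whose controller entries carry the PCI address
    # in .params.traddr -- 0000:00:06.0 through 0000:00:09.0 in this run.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1
    echo "${bdfs[0]}"
}

bdf=$(get_first_nvme_bdf)    # -> 0000:00:06.0, the controller nvme_rpc attaches as Nvme0
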
00:11:13.008 [2024-11-18 14:57:36.365780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78161 ] 00:11:13.008 [2024-11-18 14:57:36.516002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:13.008 [2024-11-18 14:57:36.558539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:13.008 [2024-11-18 14:57:36.559007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.008 [2024-11-18 14:57:36.559033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.950 14:57:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.950 14:57:37 -- common/autotest_common.sh@862 -- # return 0 00:11:13.950 14:57:37 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:13.950 Nvme0n1 00:11:13.950 14:57:37 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:13.950 14:57:37 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:14.211 request: 00:11:14.211 { 00:11:14.211 "filename": "non_existing_file", 00:11:14.211 "bdev_name": "Nvme0n1", 00:11:14.211 "method": "bdev_nvme_apply_firmware", 00:11:14.211 "req_id": 1 00:11:14.211 } 00:11:14.211 Got JSON-RPC error response 00:11:14.211 response: 00:11:14.211 { 00:11:14.211 "code": -32603, 00:11:14.211 "message": "open file failed." 00:11:14.211 } 00:11:14.211 14:57:37 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:14.211 14:57:37 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:14.211 14:57:37 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:14.471 14:57:37 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:14.471 14:57:37 -- nvme/nvme_rpc.sh@40 -- # killprocess 78161 00:11:14.471 14:57:37 -- common/autotest_common.sh@936 -- # '[' -z 78161 ']' 00:11:14.471 14:57:37 -- common/autotest_common.sh@940 -- # kill -0 78161 00:11:14.471 14:57:37 -- common/autotest_common.sh@941 -- # uname 00:11:14.471 14:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.471 14:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78161 00:11:14.471 14:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:14.471 killing process with pid 78161 00:11:14.471 14:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:14.471 14:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78161' 00:11:14.471 14:57:37 -- common/autotest_common.sh@955 -- # kill 78161 00:11:14.471 14:57:37 -- common/autotest_common.sh@960 -- # wait 78161 00:11:14.733 00:11:14.733 real 0m2.178s 00:11:14.733 user 0m4.176s 00:11:14.733 sys 0m0.545s 00:11:14.733 14:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:14.733 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:14.733 ************************************ 00:11:14.733 END TEST nvme_rpc 00:11:14.733 ************************************ 00:11:14.733 14:57:38 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:14.733 14:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.733 14:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 
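Stripped of tracing, the nvme_rpc body that just ran is three RPC calls: attach the first controller, verify that bdev_nvme_apply_firmware fails cleanly on a missing file (the -32603 "open file failed." response above), and detach. A sketch using the same rpc.py path and arguments from the log:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the controller found earlier; the target replies with the bdev
# name it created (Nvme0n1 in the log).
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0

# This call is expected to fail: the firmware image does not exist, so
# the target returns JSON-RPC error -32603 and rpc.py exits nonzero.
if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "expected bdev_nvme_apply_firmware to fail" >&2
    exit 1
fi

$rpc_py bdev_nvme_detach_controller Nvme0
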
00:11:14.733 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:14.733 ************************************ 00:11:14.733 START TEST nvme_rpc_timeouts 00:11:14.733 ************************************ 00:11:14.733 14:57:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:14.994 * Looking for test storage... 00:11:14.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.994 14:57:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:14.994 14:57:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:14.994 14:57:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:14.994 14:57:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:14.994 14:57:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:14.994 14:57:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:14.994 14:57:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:14.994 14:57:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:14.994 14:57:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:14.994 14:57:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.994 14:57:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:14.994 14:57:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:14.994 14:57:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:14.994 14:57:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:14.994 14:57:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:14.994 14:57:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:14.994 14:57:38 -- scripts/common.sh@344 -- # : 1 00:11:14.994 14:57:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:14.994 14:57:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.994 14:57:38 -- scripts/common.sh@364 -- # decimal 1 00:11:14.994 14:57:38 -- scripts/common.sh@352 -- # local d=1 00:11:14.994 14:57:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.994 14:57:38 -- scripts/common.sh@354 -- # echo 1 00:11:14.994 14:57:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:14.994 14:57:38 -- scripts/common.sh@365 -- # decimal 2 00:11:14.994 14:57:38 -- scripts/common.sh@352 -- # local d=2 00:11:14.994 14:57:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.994 14:57:38 -- scripts/common.sh@354 -- # echo 2 00:11:14.994 14:57:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:14.994 14:57:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:14.994 14:57:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:14.994 14:57:38 -- scripts/common.sh@367 -- # return 0 00:11:14.994 14:57:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.994 14:57:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.994 --rc genhtml_branch_coverage=1 00:11:14.994 --rc genhtml_function_coverage=1 00:11:14.994 --rc genhtml_legend=1 00:11:14.994 --rc geninfo_all_blocks=1 00:11:14.994 --rc geninfo_unexecuted_blocks=1 00:11:14.994 00:11:14.994 ' 00:11:14.994 14:57:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.994 --rc genhtml_branch_coverage=1 00:11:14.994 --rc genhtml_function_coverage=1 00:11:14.994 --rc genhtml_legend=1 00:11:14.994 --rc geninfo_all_blocks=1 00:11:14.994 --rc geninfo_unexecuted_blocks=1 00:11:14.994 00:11:14.994 ' 00:11:14.994 14:57:38 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.994 --rc genhtml_branch_coverage=1 00:11:14.994 --rc genhtml_function_coverage=1 00:11:14.994 --rc genhtml_legend=1 00:11:14.994 --rc geninfo_all_blocks=1 00:11:14.994 --rc geninfo_unexecuted_blocks=1 00:11:14.994 00:11:14.994 ' 00:11:14.994 14:57:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:14.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.994 --rc genhtml_branch_coverage=1 00:11:14.994 --rc genhtml_function_coverage=1 00:11:14.994 --rc genhtml_legend=1 00:11:14.994 --rc geninfo_all_blocks=1 00:11:14.994 --rc geninfo_unexecuted_blocks=1 00:11:14.994 00:11:14.994 ' 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78215 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78215 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78246 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78246 00:11:14.994 14:57:38 -- common/autotest_common.sh@829 -- # '[' -z 78246 ']' 00:11:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.994 14:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.994 14:57:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.994 14:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.994 14:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.994 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:14.994 14:57:38 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:14.994 [2024-11-18 14:57:38.530639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:14.994 [2024-11-18 14:57:38.530766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78246 ] 00:11:15.260 [2024-11-18 14:57:38.678743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.260 [2024-11-18 14:57:38.719918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.260 [2024-11-18 14:57:38.720420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.260 [2024-11-18 14:57:38.720455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.835 Checking default timeout settings: 00:11:15.835 14:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.835 14:57:39 -- common/autotest_common.sh@862 -- # return 0 00:11:15.835 14:57:39 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:15.835 14:57:39 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:16.096 Making settings changes with rpc: 00:11:16.096 14:57:39 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:16.096 14:57:39 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:16.357 Check default vs. modified settings: 00:11:16.357 14:57:39 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:16.357 14:57:39 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:16.618 Setting action_on_timeout is changed as expected. 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
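The timeout knobs themselves are changed with a single bdev_nvme_set_options call sandwiched between two save_config snapshots, which is what makes the before/after comparison below possible. A minimal sketch using the flags and temp-file names from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_78215       # defaults: action none, both timeouts 0
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_78215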
00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 Setting timeout_us is changed as expected. 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:16.618 Setting timeout_admin_us is changed as expected. 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78215 /tmp/settings_modified_78215 00:11:16.618 14:57:40 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78246 00:11:16.618 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 78246 ']' 00:11:16.618 14:57:40 -- common/autotest_common.sh@940 -- # kill -0 78246 00:11:16.618 14:57:40 -- common/autotest_common.sh@941 -- # uname 00:11:16.618 14:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.618 14:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78246 00:11:16.618 killing process with pid 78246 00:11:16.618 14:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.618 14:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.618 14:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78246' 00:11:16.618 14:57:40 -- common/autotest_common.sh@955 -- # kill 78246 00:11:16.618 14:57:40 -- common/autotest_common.sh@960 -- # wait 78246 00:11:17.217 RPC TIMEOUT SETTING TEST PASSED. 00:11:17.217 14:57:40 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
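All three comparisons above (action_on_timeout, timeout_us, timeout_admin_us) reuse the same grep | awk | sed extraction pipeline visible in the xtrace; roughly:

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_78215  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_78215 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # the test passes only if every value actually changed (none->abort, 0->12000000, 0->24000000)
    [[ $before != $after ]] && echo "Setting $setting is changed as expected."
done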
00:11:17.217 ************************************ 00:11:17.217 END TEST nvme_rpc_timeouts 00:11:17.217 ************************************ 00:11:17.217 00:11:17.217 real 0m2.217s 00:11:17.217 user 0m4.314s 00:11:17.217 sys 0m0.510s 00:11:17.217 14:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.217 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:11:17.217 14:57:40 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:17.217 14:57:40 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:17.217 14:57:40 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:17.217 14:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.217 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:11:17.217 ************************************ 00:11:17.217 START TEST nvme_xnvme 00:11:17.217 ************************************ 00:11:17.217 14:57:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:17.217 * Looking for test storage... 00:11:17.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:17.217 14:57:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:17.217 14:57:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:17.217 14:57:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:17.217 14:57:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:17.217 14:57:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:17.217 14:57:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:17.217 14:57:40 -- scripts/common.sh@335 -- # IFS=.-: 00:11:17.217 14:57:40 -- scripts/common.sh@335 -- # read -ra ver1 00:11:17.217 14:57:40 -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.217 14:57:40 -- scripts/common.sh@336 -- # read -ra ver2 00:11:17.217 14:57:40 -- scripts/common.sh@337 -- # local 'op=<' 00:11:17.217 14:57:40 -- scripts/common.sh@339 -- # ver1_l=2 00:11:17.217 14:57:40 -- scripts/common.sh@340 -- # ver2_l=1 00:11:17.217 14:57:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:17.217 14:57:40 -- scripts/common.sh@343 -- # case "$op" in 00:11:17.217 14:57:40 -- scripts/common.sh@344 -- # : 1 00:11:17.217 14:57:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:17.217 14:57:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.217 14:57:40 -- scripts/common.sh@364 -- # decimal 1 00:11:17.217 14:57:40 -- scripts/common.sh@352 -- # local d=1 00:11:17.217 14:57:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.217 14:57:40 -- scripts/common.sh@354 -- # echo 1 00:11:17.217 14:57:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:17.217 14:57:40 -- scripts/common.sh@365 -- # decimal 2 00:11:17.217 14:57:40 -- scripts/common.sh@352 -- # local d=2 00:11:17.217 14:57:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.217 14:57:40 -- scripts/common.sh@354 -- # echo 2 00:11:17.217 14:57:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:17.217 14:57:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:17.217 14:57:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:17.217 14:57:40 -- scripts/common.sh@367 -- # return 0 00:11:17.217 14:57:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:17.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.217 --rc genhtml_branch_coverage=1 00:11:17.217 --rc genhtml_function_coverage=1 00:11:17.217 --rc genhtml_legend=1 00:11:17.217 --rc geninfo_all_blocks=1 00:11:17.217 --rc geninfo_unexecuted_blocks=1 00:11:17.217 00:11:17.217 ' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:17.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.217 --rc genhtml_branch_coverage=1 00:11:17.217 --rc genhtml_function_coverage=1 00:11:17.217 --rc genhtml_legend=1 00:11:17.217 --rc geninfo_all_blocks=1 00:11:17.217 --rc geninfo_unexecuted_blocks=1 00:11:17.217 00:11:17.217 ' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:17.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.217 --rc genhtml_branch_coverage=1 00:11:17.217 --rc genhtml_function_coverage=1 00:11:17.217 --rc genhtml_legend=1 00:11:17.217 --rc geninfo_all_blocks=1 00:11:17.217 --rc geninfo_unexecuted_blocks=1 00:11:17.217 00:11:17.217 ' 00:11:17.217 14:57:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:17.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.217 --rc genhtml_branch_coverage=1 00:11:17.217 --rc genhtml_function_coverage=1 00:11:17.217 --rc genhtml_legend=1 00:11:17.217 --rc geninfo_all_blocks=1 00:11:17.217 --rc geninfo_unexecuted_blocks=1 00:11:17.217 00:11:17.217 ' 00:11:17.217 14:57:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.217 14:57:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.217 14:57:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.217 14:57:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.217 14:57:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.218 14:57:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.218 14:57:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.218 14:57:40 -- paths/export.sh@5 -- # export PATH 00:11:17.218 14:57:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:17.218 14:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.218 14:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.218 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:11:17.218 ************************************ 00:11:17.218 START TEST xnvme_to_malloc_dd_copy 00:11:17.218 ************************************ 00:11:17.218 14:57:40 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:17.218 14:57:40 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:17.218 14:57:40 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:17.218 14:57:40 -- dd/common.sh@191 -- # return 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@18 -- # local io 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:17.218 14:57:40 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:17.218 14:57:40 -- dd/common.sh@31 -- # xtrace_disable 00:11:17.218 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:11:17.479 { 00:11:17.479 "subsystems": [ 00:11:17.479 { 00:11:17.479 "subsystem": "bdev", 00:11:17.479 "config": [ 00:11:17.480 { 00:11:17.480 "params": { 00:11:17.480 "block_size": 512, 00:11:17.480 "num_blocks": 2097152, 00:11:17.480 "name": "malloc0" 00:11:17.480 }, 00:11:17.480 "method": "bdev_malloc_create" 00:11:17.480 }, 00:11:17.480 { 00:11:17.480 "params": { 00:11:17.480 "io_mechanism": "libaio", 00:11:17.480 "filename": "/dev/nullb0", 00:11:17.480 "name": "null0" 00:11:17.480 }, 00:11:17.480 "method": "bdev_xnvme_create" 00:11:17.480 }, 00:11:17.480 { 00:11:17.480 "method": "bdev_wait_for_examine" 00:11:17.480 } 00:11:17.480 ] 00:11:17.480 } 00:11:17.480 ] 00:11:17.480 } 00:11:17.480 [2024-11-18 14:57:40.837413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:17.480 [2024-11-18 14:57:40.837534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78362 ] 00:11:17.480 [2024-11-18 14:57:40.986358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.480 [2024-11-18 14:57:41.055461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.395  [2024-11-18T14:57:43.556Z] Copying: 229/1024 [MB] (229 MBps) [2024-11-18T14:57:44.930Z] Copying: 474/1024 [MB] (245 MBps) [2024-11-18T14:57:45.497Z] Copying: 788/1024 [MB] (313 MBps) [2024-11-18T14:57:45.758Z] Copying: 1024/1024 [MB] (average 272 MBps) 00:11:22.168 00:11:22.168 14:57:45 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:22.168 14:57:45 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:22.168 14:57:45 -- dd/common.sh@31 -- # xtrace_disable 00:11:22.168 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:11:22.427 { 00:11:22.428 "subsystems": [ 00:11:22.428 { 00:11:22.428 "subsystem": "bdev", 00:11:22.428 "config": [ 00:11:22.428 { 00:11:22.428 "params": { 00:11:22.428 "block_size": 512, 00:11:22.428 "num_blocks": 2097152, 00:11:22.428 "name": "malloc0" 00:11:22.428 }, 00:11:22.428 "method": "bdev_malloc_create" 00:11:22.428 }, 00:11:22.428 { 00:11:22.428 "params": { 00:11:22.428 "io_mechanism": "libaio", 00:11:22.428 "filename": "/dev/nullb0", 00:11:22.428 "name": "null0" 00:11:22.428 }, 00:11:22.428 "method": "bdev_xnvme_create" 00:11:22.428 }, 00:11:22.428 { 00:11:22.428 "method": "bdev_wait_for_examine" 00:11:22.428 } 00:11:22.428 ] 00:11:22.428 } 00:11:22.428 ] 00:11:22.428 } 00:11:22.428 [2024-11-18 14:57:45.804281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
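Each copy pass in this test is one spdk_dd run fed its bdev configuration over a file descriptor (hence the --json /dev/fd/62 in the trace). A runnable sketch of the first (libaio) direction, assuming null_blk is loaded as the log shows (modprobe null_blk gb=1, which creates the 1 GiB /dev/nullb0; 2097152 blocks of 512 bytes on the malloc side match that size):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
)

The second pass simply swaps --ib and --ob to copy back from null0 into malloc0, and the later runs repeat both directions with "io_mechanism": "io_uring".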
00:11:22.428 [2024-11-18 14:57:45.804408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78429 ] 00:11:22.428 [2024-11-18 14:57:45.953468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.428 [2024-11-18 14:57:46.002518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.804  [2024-11-18T14:57:48.340Z] Copying: 314/1024 [MB] (314 MBps) [2024-11-18T14:57:49.722Z] Copying: 583/1024 [MB] (268 MBps) [2024-11-18T14:57:49.980Z] Copying: 820/1024 [MB] (236 MBps) [2024-11-18T14:57:50.548Z] Copying: 1024/1024 [MB] (average 280 MBps) 00:11:26.958 00:11:26.958 14:57:50 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:26.958 14:57:50 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:26.958 14:57:50 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:26.958 14:57:50 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:26.958 14:57:50 -- dd/common.sh@31 -- # xtrace_disable 00:11:26.958 14:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:26.958 { 00:11:26.958 "subsystems": [ 00:11:26.958 { 00:11:26.958 "subsystem": "bdev", 00:11:26.958 "config": [ 00:11:26.958 { 00:11:26.958 "params": { 00:11:26.958 "block_size": 512, 00:11:26.958 "num_blocks": 2097152, 00:11:26.958 "name": "malloc0" 00:11:26.958 }, 00:11:26.958 "method": "bdev_malloc_create" 00:11:26.958 }, 00:11:26.958 { 00:11:26.958 "params": { 00:11:26.958 "io_mechanism": "io_uring", 00:11:26.958 "filename": "/dev/nullb0", 00:11:26.958 "name": "null0" 00:11:26.958 }, 00:11:26.958 "method": "bdev_xnvme_create" 00:11:26.958 }, 00:11:26.958 { 00:11:26.958 "method": "bdev_wait_for_examine" 00:11:26.958 } 00:11:26.958 ] 00:11:26.958 } 00:11:26.958 ] 00:11:26.958 } 00:11:26.958 [2024-11-18 14:57:50.461604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:26.958 [2024-11-18 14:57:50.462018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78489 ] 00:11:27.217 [2024-11-18 14:57:50.609448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.217 [2024-11-18 14:57:50.660095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.589  [2024-11-18T14:57:53.114Z] Copying: 322/1024 [MB] (322 MBps) [2024-11-18T14:57:54.081Z] Copying: 645/1024 [MB] (323 MBps) [2024-11-18T14:57:54.340Z] Copying: 968/1024 [MB] (323 MBps) [2024-11-18T14:57:54.600Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:11:31.010 00:11:31.010 14:57:54 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:31.010 14:57:54 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:31.010 14:57:54 -- dd/common.sh@31 -- # xtrace_disable 00:11:31.010 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:11:31.010 { 00:11:31.010 "subsystems": [ 00:11:31.010 { 00:11:31.010 "subsystem": "bdev", 00:11:31.010 "config": [ 00:11:31.010 { 00:11:31.010 "params": { 00:11:31.010 "block_size": 512, 00:11:31.010 "num_blocks": 2097152, 00:11:31.010 "name": "malloc0" 00:11:31.010 }, 00:11:31.010 "method": "bdev_malloc_create" 00:11:31.010 }, 00:11:31.010 { 00:11:31.010 "params": { 00:11:31.010 "io_mechanism": "io_uring", 00:11:31.010 "filename": "/dev/nullb0", 00:11:31.010 "name": "null0" 00:11:31.010 }, 00:11:31.010 "method": "bdev_xnvme_create" 00:11:31.010 }, 00:11:31.010 { 00:11:31.010 "method": "bdev_wait_for_examine" 00:11:31.010 } 00:11:31.010 ] 00:11:31.010 } 00:11:31.010 ] 00:11:31.010 } 00:11:31.269 [2024-11-18 14:57:54.607785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:31.269 [2024-11-18 14:57:54.607904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78543 ] 00:11:31.269 [2024-11-18 14:57:54.759509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.269 [2024-11-18 14:57:54.818584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.645  [2024-11-18T14:57:57.168Z] Copying: 237/1024 [MB] (237 MBps) [2024-11-18T14:57:58.541Z] Copying: 536/1024 [MB] (299 MBps) [2024-11-18T14:57:58.800Z] Copying: 865/1024 [MB] (328 MBps) [2024-11-18T14:57:59.058Z] Copying: 1024/1024 [MB] (average 293 MBps) 00:11:35.468 00:11:35.468 14:57:59 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:35.468 14:57:59 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:35.727 00:11:35.727 real 0m18.312s 00:11:35.727 user 0m14.793s 00:11:35.727 sys 0m3.000s 00:11:35.727 ************************************ 00:11:35.727 END TEST xnvme_to_malloc_dd_copy 00:11:35.727 ************************************ 00:11:35.727 14:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:35.727 14:57:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:35.727 14:57:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:35.727 14:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.727 14:57:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.727 ************************************ 00:11:35.727 START TEST xnvme_bdevperf 00:11:35.727 ************************************ 00:11:35.727 14:57:59 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:35.727 14:57:59 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:35.727 14:57:59 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:35.727 14:57:59 -- dd/common.sh@191 -- # return 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@60 -- # local io 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:35.727 14:57:59 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:35.727 14:57:59 -- dd/common.sh@31 -- # xtrace_disable 00:11:35.727 14:57:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.727 { 00:11:35.727 "subsystems": [ 00:11:35.727 { 00:11:35.727 "subsystem": "bdev", 00:11:35.727 "config": [ 00:11:35.727 { 00:11:35.727 "params": { 00:11:35.727 "io_mechanism": "libaio", 
00:11:35.727 "filename": "/dev/nullb0", 00:11:35.727 "name": "null0" 00:11:35.727 }, 00:11:35.727 "method": "bdev_xnvme_create" 00:11:35.727 }, 00:11:35.727 { 00:11:35.727 "method": "bdev_wait_for_examine" 00:11:35.727 } 00:11:35.727 ] 00:11:35.727 } 00:11:35.727 ] 00:11:35.727 } 00:11:35.727 [2024-11-18 14:57:59.187545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:35.727 [2024-11-18 14:57:59.187665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78620 ] 00:11:35.985 [2024-11-18 14:57:59.336035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.985 [2024-11-18 14:57:59.377651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.985 Running I/O for 5 seconds... 00:11:41.256 00:11:41.256 Latency(us) 00:11:41.256 [2024-11-18T14:58:04.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.256 [2024-11-18T14:58:04.847Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:41.257 null0 : 5.00 197243.05 770.48 0.00 0.00 322.33 118.15 507.27 00:11:41.257 [2024-11-18T14:58:04.847Z] =================================================================================================================== 00:11:41.257 [2024-11-18T14:58:04.847Z] Total : 197243.05 770.48 0.00 0.00 322.33 118.15 507.27 00:11:41.257 14:58:04 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:41.257 14:58:04 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:41.257 14:58:04 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:41.257 14:58:04 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:41.257 14:58:04 -- dd/common.sh@31 -- # xtrace_disable 00:11:41.257 14:58:04 -- common/autotest_common.sh@10 -- # set +x 00:11:41.257 { 00:11:41.257 "subsystems": [ 00:11:41.257 { 00:11:41.257 "subsystem": "bdev", 00:11:41.257 "config": [ 00:11:41.257 { 00:11:41.257 "params": { 00:11:41.257 "io_mechanism": "io_uring", 00:11:41.257 "filename": "/dev/nullb0", 00:11:41.257 "name": "null0" 00:11:41.257 }, 00:11:41.257 "method": "bdev_xnvme_create" 00:11:41.257 }, 00:11:41.257 { 00:11:41.257 "method": "bdev_wait_for_examine" 00:11:41.257 } 00:11:41.257 ] 00:11:41.257 } 00:11:41.257 ] 00:11:41.257 } 00:11:41.257 [2024-11-18 14:58:04.741070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:41.257 [2024-11-18 14:58:04.741199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78689 ] 00:11:41.515 [2024-11-18 14:58:04.890294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.515 [2024-11-18 14:58:04.946107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.515 Running I/O for 5 seconds... 
00:11:46.899 00:11:46.899 Latency(us) 00:11:46.899 [2024-11-18T14:58:10.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.899 [2024-11-18T14:58:10.489Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:46.899 null0 : 5.00 244731.30 955.98 0.00 0.00 259.55 160.69 614.40 00:11:46.899 [2024-11-18T14:58:10.489Z] =================================================================================================================== 00:11:46.899 [2024-11-18T14:58:10.489Z] Total : 244731.30 955.98 0.00 0.00 259.55 160.69 614.40 00:11:46.899 14:58:10 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:46.899 14:58:10 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:46.899 00:11:46.899 real 0m11.133s 00:11:46.899 user 0m8.697s 00:11:46.899 sys 0m2.185s 00:11:46.899 14:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.899 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:46.899 ************************************ 00:11:46.899 END TEST xnvme_bdevperf 00:11:46.899 ************************************ 00:11:46.899 00:11:46.899 real 0m29.697s 00:11:46.899 user 0m23.616s 00:11:46.899 sys 0m5.291s 00:11:46.899 14:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.899 ************************************ 00:11:46.899 END TEST nvme_xnvme 00:11:46.899 ************************************ 00:11:46.899 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:46.899 14:58:10 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:46.899 14:58:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.899 14:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.899 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:46.899 ************************************ 00:11:46.899 START TEST blockdev_xnvme 00:11:46.899 ************************************ 00:11:46.899 14:58:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:46.899 * Looking for test storage... 00:11:46.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:46.899 14:58:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:46.899 14:58:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:46.899 14:58:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:46.899 14:58:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:46.899 14:58:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:46.899 14:58:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:46.899 14:58:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:46.899 14:58:10 -- scripts/common.sh@335 -- # IFS=.-: 00:11:46.899 14:58:10 -- scripts/common.sh@335 -- # read -ra ver1 00:11:46.899 14:58:10 -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.899 14:58:10 -- scripts/common.sh@336 -- # read -ra ver2 00:11:46.899 14:58:10 -- scripts/common.sh@337 -- # local 'op=<' 00:11:46.899 14:58:10 -- scripts/common.sh@339 -- # ver1_l=2 00:11:46.899 14:58:10 -- scripts/common.sh@340 -- # ver2_l=1 00:11:46.900 14:58:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:46.900 14:58:10 -- scripts/common.sh@343 -- # case "$op" in 00:11:46.900 14:58:10 -- scripts/common.sh@344 -- # : 1 00:11:46.900 14:58:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:46.900 14:58:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.900 14:58:10 -- scripts/common.sh@364 -- # decimal 1 00:11:46.900 14:58:10 -- scripts/common.sh@352 -- # local d=1 00:11:46.900 14:58:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.900 14:58:10 -- scripts/common.sh@354 -- # echo 1 00:11:46.900 14:58:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:46.900 14:58:10 -- scripts/common.sh@365 -- # decimal 2 00:11:47.160 14:58:10 -- scripts/common.sh@352 -- # local d=2 00:11:47.160 14:58:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.160 14:58:10 -- scripts/common.sh@354 -- # echo 2 00:11:47.160 14:58:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:47.160 14:58:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:47.160 14:58:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:47.160 14:58:10 -- scripts/common.sh@367 -- # return 0 00:11:47.160 14:58:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.160 14:58:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.160 --rc genhtml_branch_coverage=1 00:11:47.160 --rc genhtml_function_coverage=1 00:11:47.160 --rc genhtml_legend=1 00:11:47.160 --rc geninfo_all_blocks=1 00:11:47.160 --rc geninfo_unexecuted_blocks=1 00:11:47.160 00:11:47.160 ' 00:11:47.160 14:58:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.160 --rc genhtml_branch_coverage=1 00:11:47.160 --rc genhtml_function_coverage=1 00:11:47.160 --rc genhtml_legend=1 00:11:47.160 --rc geninfo_all_blocks=1 00:11:47.160 --rc geninfo_unexecuted_blocks=1 00:11:47.160 00:11:47.160 ' 00:11:47.160 14:58:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.160 --rc genhtml_branch_coverage=1 00:11:47.160 --rc genhtml_function_coverage=1 00:11:47.160 --rc genhtml_legend=1 00:11:47.160 --rc geninfo_all_blocks=1 00:11:47.160 --rc geninfo_unexecuted_blocks=1 00:11:47.160 00:11:47.160 ' 00:11:47.160 14:58:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.160 --rc genhtml_branch_coverage=1 00:11:47.160 --rc genhtml_function_coverage=1 00:11:47.160 --rc genhtml_legend=1 00:11:47.160 --rc geninfo_all_blocks=1 00:11:47.160 --rc geninfo_unexecuted_blocks=1 00:11:47.160 00:11:47.160 ' 00:11:47.160 14:58:10 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:47.160 14:58:10 -- bdev/nbd_common.sh@6 -- # set -e 00:11:47.160 14:58:10 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:47.160 14:58:10 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:47.160 14:58:10 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:47.160 14:58:10 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:47.160 14:58:10 -- bdev/blockdev.sh@18 -- # : 00:11:47.160 14:58:10 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:47.160 14:58:10 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:47.160 14:58:10 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:47.160 14:58:10 -- bdev/blockdev.sh@672 -- # uname -s 00:11:47.160 14:58:10 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:47.160 14:58:10 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:47.160 14:58:10 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:47.160 14:58:10 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:47.160 14:58:10 -- bdev/blockdev.sh@682 -- # dek= 00:11:47.160 14:58:10 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:47.160 14:58:10 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:47.160 14:58:10 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:47.160 14:58:10 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:47.160 14:58:10 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:47.160 14:58:10 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:47.160 14:58:10 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=78824 00:11:47.160 14:58:10 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:47.160 14:58:10 -- bdev/blockdev.sh@47 -- # waitforlisten 78824 00:11:47.160 14:58:10 -- common/autotest_common.sh@829 -- # '[' -z 78824 ']' 00:11:47.160 14:58:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.160 14:58:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.161 14:58:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.161 14:58:10 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:47.161 14:58:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.161 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:11:47.161 [2024-11-18 14:58:10.585836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:47.161 [2024-11-18 14:58:10.586002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78824 ] 00:11:47.161 [2024-11-18 14:58:10.741772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.421 [2024-11-18 14:58:10.811217] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.421 [2024-11-18 14:58:10.811488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.994 14:58:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.994 14:58:11 -- common/autotest_common.sh@862 -- # return 0 00:11:47.994 14:58:11 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:47.994 14:58:11 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:47.994 14:58:11 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:47.994 14:58:11 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:47.994 14:58:11 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:48.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:48.518 Waiting for block devices as requested 00:11:48.518 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.518 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.518 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.518 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.788 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:53.788 14:58:17 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:53.788 14:58:17 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:53.788 14:58:17 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:53.788 14:58:17 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:53.788 14:58:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:53.788 14:58:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:53.788 14:58:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.788 14:58:17 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:53.788 14:58:17 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:53.789 14:58:17 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:53.789 14:58:17 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:53.789 14:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:53.789 nvme0n1 00:11:53.789 nvme1n1 00:11:53.789 nvme1n2 00:11:53.789 nvme1n3 00:11:53.789 nvme2n1 00:11:53.789 nvme3n1 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:53.789 14:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@738 -- # cat 00:11:53.789 14:58:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:53.789 14:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:53.789 14:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:53.789 14:58:17 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:53.789 14:58:17 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:53.789 14:58:17 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:53.789 14:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.789 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:53.789 14:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.789 14:58:17 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:53.789 14:58:17 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "25114c04-a474-4a29-901f-3ac5e596f7eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "25114c04-a474-4a29-901f-3ac5e596f7eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "adc4d5d1-a8de-4d24-a07e-82af547f46bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "adc4d5d1-a8de-4d24-a07e-82af547f46bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "4bd8a14d-c09d-474b-88e9-fd9af3e49c8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bd8a14d-c09d-474b-88e9-fd9af3e49c8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d9a43a8f-9008-46f4-b0d9-09704efb4b0c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9a43a8f-9008-46f4-b0d9-09704efb4b0c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "be98427f-4c93-4b62-8247-34fab359e4c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "be98427f-4c93-4b62-8247-34fab359e4c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "affb863b-d9e0-4710-932c-809aa4862844"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "affb863b-d9e0-4710-932c-809aa4862844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:53.789 14:58:17 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:53.789 14:58:17 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:53.789 14:58:17 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:53.789 14:58:17 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:53.789 14:58:17 -- bdev/blockdev.sh@752 -- # killprocess 78824 00:11:53.789 14:58:17 -- common/autotest_common.sh@936 -- # '[' -z 78824 ']' 00:11:53.789 14:58:17 -- common/autotest_common.sh@940 -- # kill -0 78824 00:11:53.789 14:58:17 -- common/autotest_common.sh@941 -- # uname 00:11:53.789 14:58:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.789 14:58:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78824 00:11:54.047 14:58:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.047 14:58:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.047 killing process with pid 78824 00:11:54.047 14:58:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78824' 00:11:54.047 14:58:17 -- common/autotest_common.sh@955 -- # kill 78824 00:11:54.047 14:58:17 -- common/autotest_common.sh@960 -- # wait 78824 00:11:54.307 14:58:17 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:54.307 14:58:17 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:54.307 14:58:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:54.307 14:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.307 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:11:54.307 ************************************ 00:11:54.307 START TEST bdev_hello_world 00:11:54.307 ************************************ 00:11:54.307 14:58:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:54.308 [2024-11-18 14:58:17.740996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:54.308 [2024-11-18 14:58:17.741116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79192 ] 00:11:54.308 [2024-11-18 14:58:17.889364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.566 [2024-11-18 14:58:17.927936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.566 [2024-11-18 14:58:18.100109] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:54.566 [2024-11-18 14:58:18.100158] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:54.566 [2024-11-18 14:58:18.100170] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:54.566 [2024-11-18 14:58:18.101772] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:54.566 [2024-11-18 14:58:18.102096] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:54.566 [2024-11-18 14:58:18.102117] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:54.566 [2024-11-18 14:58:18.102293] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:54.566 00:11:54.566 [2024-11-18 14:58:18.102329] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:54.826 00:11:54.826 real 0m0.586s 00:11:54.826 user 0m0.318s 00:11:54.826 sys 0m0.159s 00:11:54.826 14:58:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.826 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:11:54.826 ************************************ 00:11:54.826 END TEST bdev_hello_world 00:11:54.826 ************************************ 00:11:54.826 14:58:18 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:54.826 14:58:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:54.826 14:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.826 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:11:54.826 ************************************ 00:11:54.826 START TEST bdev_bounds 00:11:54.826 ************************************ 00:11:54.826 14:58:18 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:54.826 14:58:18 -- bdev/blockdev.sh@288 -- # bdevio_pid=79219 00:11:54.826 14:58:18 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:54.826 Process bdevio pid: 79219 00:11:54.826 14:58:18 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 79219' 00:11:54.826 14:58:18 -- bdev/blockdev.sh@291 -- # waitforlisten 79219 00:11:54.826 14:58:18 -- common/autotest_common.sh@829 -- # '[' -z 79219 ']' 00:11:54.826 14:58:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.826 14:58:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.826 14:58:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
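The hello_bdev NOTICE lines above trace the example's full round trip: open the bdev named by -b, open an I/O channel, write a buffer, and on write completion read it back and print the recovered "Hello World!" string before stopping the app. The run can be repeated by hand with the same invocation the harness used (paths as they appear in this log):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1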
00:11:54.826 14:58:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.826 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:11:54.826 14:58:18 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:54.826 [2024-11-18 14:58:18.369926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:54.826 [2024-11-18 14:58:18.370038] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79219 ] 00:11:55.085 [2024-11-18 14:58:18.517600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.085 [2024-11-18 14:58:18.558100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.085 [2024-11-18 14:58:18.558478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.085 [2024-11-18 14:58:18.558539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.652 14:58:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.652 14:58:19 -- common/autotest_common.sh@862 -- # return 0 00:11:55.652 14:58:19 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:55.911 I/O targets: 00:11:55.911 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:55.911 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:55.911 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:55.911 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:55.911 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:55.911 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:55.911 00:11:55.911 00:11:55.911 CUnit - A unit testing framework for C - Version 2.1-3 00:11:55.911 http://cunit.sourceforge.net/ 00:11:55.911 00:11:55.911 00:11:55.911 Suite: bdevio tests on: nvme3n1 00:11:55.911 Test: blockdev write read block ...passed 00:11:55.911 Test: blockdev write zeroes read block ...passed 00:11:55.911 Test: blockdev write zeroes read no split ...passed 00:11:55.911 Test: blockdev write zeroes read split ...passed 00:11:55.911 Test: blockdev write zeroes read split partial ...passed 00:11:55.911 Test: blockdev reset ...passed 00:11:55.911 Test: blockdev write read 8 blocks ...passed 00:11:55.911 Test: blockdev write read size > 128k ...passed 00:11:55.911 Test: blockdev write read invalid size ...passed 00:11:55.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.911 Test: blockdev write read max offset ...passed 00:11:55.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.911 Test: blockdev writev readv 8 blocks ...passed 00:11:55.911 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.911 Test: blockdev writev readv block ...passed 00:11:55.911 Test: blockdev writev readv size > 128k ...passed 00:11:55.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.911 Test: blockdev comparev and writev ...passed 00:11:55.911 Test: blockdev nvme passthru rw ...passed 00:11:55.911 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.911 Test: blockdev nvme admin passthru ...passed 00:11:55.911 Test: blockdev copy ...passed 00:11:55.911 Suite: bdevio tests on: nvme2n1 00:11:55.911 Test: blockdev write read 
block ...passed 00:11:55.911 Test: blockdev write zeroes read block ...passed 00:11:55.911 Test: blockdev write zeroes read no split ...passed 00:11:55.911 Test: blockdev write zeroes read split ...passed 00:11:55.911 Test: blockdev write zeroes read split partial ...passed 00:11:55.911 Test: blockdev reset ...passed 00:11:55.911 Test: blockdev write read 8 blocks ...passed 00:11:55.911 Test: blockdev write read size > 128k ...passed 00:11:55.911 Test: blockdev write read invalid size ...passed 00:11:55.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.911 Test: blockdev write read max offset ...passed 00:11:55.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.911 Test: blockdev writev readv 8 blocks ...passed 00:11:55.911 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.911 Test: blockdev writev readv block ...passed 00:11:55.911 Test: blockdev writev readv size > 128k ...passed 00:11:55.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.911 Test: blockdev comparev and writev ...passed 00:11:55.911 Test: blockdev nvme passthru rw ...passed 00:11:55.911 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.911 Test: blockdev nvme admin passthru ...passed 00:11:55.911 Test: blockdev copy ...passed 00:11:55.911 Suite: bdevio tests on: nvme1n3 00:11:55.911 Test: blockdev write read block ...passed 00:11:55.911 Test: blockdev write zeroes read block ...passed 00:11:55.911 Test: blockdev write zeroes read no split ...passed 00:11:55.911 Test: blockdev write zeroes read split ...passed 00:11:55.911 Test: blockdev write zeroes read split partial ...passed 00:11:55.911 Test: blockdev reset ...passed 00:11:55.911 Test: blockdev write read 8 blocks ...passed 00:11:55.911 Test: blockdev write read size > 128k ...passed 00:11:55.911 Test: blockdev write read invalid size ...passed 00:11:55.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.911 Test: blockdev write read max offset ...passed 00:11:55.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.911 Test: blockdev writev readv 8 blocks ...passed 00:11:55.911 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.911 Test: blockdev writev readv block ...passed 00:11:55.911 Test: blockdev writev readv size > 128k ...passed 00:11:55.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.911 Test: blockdev comparev and writev ...passed 00:11:55.911 Test: blockdev nvme passthru rw ...passed 00:11:55.911 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.911 Test: blockdev nvme admin passthru ...passed 00:11:55.911 Test: blockdev copy ...passed 00:11:55.911 Suite: bdevio tests on: nvme1n2 00:11:55.911 Test: blockdev write read block ...passed 00:11:55.911 Test: blockdev write zeroes read block ...passed 00:11:55.911 Test: blockdev write zeroes read no split ...passed 00:11:55.911 Test: blockdev write zeroes read split ...passed 00:11:55.911 Test: blockdev write zeroes read split partial ...passed 00:11:55.912 Test: blockdev reset ...passed 00:11:55.912 Test: blockdev write read 8 blocks ...passed 00:11:55.912 Test: blockdev write read size > 128k ...passed 00:11:55.912 Test: blockdev write read invalid size ...passed 00:11:55.912 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:11:55.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.912 Test: blockdev write read max offset ...passed 00:11:55.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.912 Test: blockdev writev readv 8 blocks ...passed 00:11:55.912 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.912 Test: blockdev writev readv block ...passed 00:11:55.912 Test: blockdev writev readv size > 128k ...passed 00:11:55.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.912 Test: blockdev comparev and writev ...passed 00:11:55.912 Test: blockdev nvme passthru rw ...passed 00:11:55.912 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.912 Test: blockdev nvme admin passthru ...passed 00:11:55.912 Test: blockdev copy ...passed 00:11:55.912 Suite: bdevio tests on: nvme1n1 00:11:55.912 Test: blockdev write read block ...passed 00:11:55.912 Test: blockdev write zeroes read block ...passed 00:11:55.912 Test: blockdev write zeroes read no split ...passed 00:11:55.912 Test: blockdev write zeroes read split ...passed 00:11:55.912 Test: blockdev write zeroes read split partial ...passed 00:11:55.912 Test: blockdev reset ...passed 00:11:55.912 Test: blockdev write read 8 blocks ...passed 00:11:55.912 Test: blockdev write read size > 128k ...passed 00:11:55.912 Test: blockdev write read invalid size ...passed 00:11:55.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.912 Test: blockdev write read max offset ...passed 00:11:55.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.912 Test: blockdev writev readv 8 blocks ...passed 00:11:55.912 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.912 Test: blockdev writev readv block ...passed 00:11:55.912 Test: blockdev writev readv size > 128k ...passed 00:11:55.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.912 Test: blockdev comparev and writev ...passed 00:11:55.912 Test: blockdev nvme passthru rw ...passed 00:11:55.912 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.912 Test: blockdev nvme admin passthru ...passed 00:11:55.912 Test: blockdev copy ...passed 00:11:55.912 Suite: bdevio tests on: nvme0n1 00:11:55.912 Test: blockdev write read block ...passed 00:11:55.912 Test: blockdev write zeroes read block ...passed 00:11:55.912 Test: blockdev write zeroes read no split ...passed 00:11:55.912 Test: blockdev write zeroes read split ...passed 00:11:55.912 Test: blockdev write zeroes read split partial ...passed 00:11:55.912 Test: blockdev reset ...passed 00:11:55.912 Test: blockdev write read 8 blocks ...passed 00:11:55.912 Test: blockdev write read size > 128k ...passed 00:11:55.912 Test: blockdev write read invalid size ...passed 00:11:55.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.912 Test: blockdev write read max offset ...passed 00:11:55.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.912 Test: blockdev writev readv 8 blocks ...passed 00:11:55.912 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.912 Test: blockdev writev readv block ...passed 00:11:55.912 Test: blockdev writev readv size > 128k ...passed 00:11:55.912 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:11:55.912 Test: blockdev comparev and writev ...passed 00:11:55.912 Test: blockdev nvme passthru rw ...passed 00:11:55.912 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.912 Test: blockdev nvme admin passthru ...passed 00:11:55.912 Test: blockdev copy ...passed 00:11:55.912 00:11:55.912 Run Summary: Type Total Ran Passed Failed Inactive 00:11:55.912 suites 6 6 n/a 0 0 00:11:55.912 tests 138 138 138 0 0 00:11:55.912 asserts 780 780 780 0 n/a 00:11:55.912 00:11:55.912 Elapsed time = 0.262 seconds 00:11:55.912 0 00:11:55.912 14:58:19 -- bdev/blockdev.sh@293 -- # killprocess 79219 00:11:55.912 14:58:19 -- common/autotest_common.sh@936 -- # '[' -z 79219 ']' 00:11:55.912 14:58:19 -- common/autotest_common.sh@940 -- # kill -0 79219 00:11:55.912 14:58:19 -- common/autotest_common.sh@941 -- # uname 00:11:55.912 14:58:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.912 14:58:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79219 00:11:55.912 killing process with pid 79219 00:11:55.912 14:58:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:55.912 14:58:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:55.912 14:58:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79219' 00:11:55.912 14:58:19 -- common/autotest_common.sh@955 -- # kill 79219 00:11:55.912 14:58:19 -- common/autotest_common.sh@960 -- # wait 79219 00:11:56.173 14:58:19 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:56.173 00:11:56.173 real 0m1.303s 00:11:56.173 user 0m3.232s 00:11:56.173 sys 0m0.273s 00:11:56.173 14:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.173 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:56.173 ************************************ 00:11:56.173 END TEST bdev_bounds 00:11:56.173 ************************************ 00:11:56.173 14:58:19 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:56.173 14:58:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:56.173 14:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.173 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:56.173 ************************************ 00:11:56.173 START TEST bdev_nbd 00:11:56.173 ************************************ 00:11:56.173 14:58:19 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:56.173 14:58:19 -- bdev/blockdev.sh@298 -- # uname -s 00:11:56.173 14:58:19 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:56.173 14:58:19 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.173 14:58:19 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:56.173 14:58:19 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:56.173 14:58:19 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:56.173 14:58:19 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:56.173 14:58:19 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:56.173 14:58:19 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' 
'/dev/nbd9') 00:11:56.173 14:58:19 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:56.173 14:58:19 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:56.173 14:58:19 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:56.173 14:58:19 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:56.173 14:58:19 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:56.173 14:58:19 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:56.173 14:58:19 -- bdev/blockdev.sh@316 -- # nbd_pid=79263 00:11:56.173 14:58:19 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:56.173 14:58:19 -- bdev/blockdev.sh@318 -- # waitforlisten 79263 /var/tmp/spdk-nbd.sock 00:11:56.173 14:58:19 -- common/autotest_common.sh@829 -- # '[' -z 79263 ']' 00:11:56.173 14:58:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:56.173 14:58:19 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:56.173 14:58:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:56.173 14:58:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:56.173 14:58:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.173 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:11:56.173 [2024-11-18 14:58:19.736753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
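Two recurring patterns close out the bounds test above. First, bdevio is launched with -w so it idles until tests.py triggers perform_tests over RPC; second, killprocess guards every teardown by checking that the pid is still alive and is not something dangerous before signalling it. A condensed sketch of both, with the caveat that the real killprocess helper in common/autotest_common.sh carries more platform checks than shown here:

# Start bdevio waiting for an RPC trigger, then kick off all suites.
# (The harness waits for the RPC socket first via waitforlisten.)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

# Condensed teardown guard (assumes $pid is a child of this shell).
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    local name; name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                    # never signal sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
killprocess_sketch "$bdevio_pid"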
00:11:56.173 [2024-11-18 14:58:19.736871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.432 [2024-11-18 14:58:19.885791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.432 [2024-11-18 14:58:19.925379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.003 14:58:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.003 14:58:20 -- common/autotest_common.sh@862 -- # return 0 00:11:57.003 14:58:20 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@24 -- # local i 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:57.003 14:58:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:57.264 14:58:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:57.264 14:58:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:57.264 14:58:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:57.264 14:58:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:57.264 14:58:20 -- common/autotest_common.sh@867 -- # local i 00:11:57.264 14:58:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.264 14:58:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.264 14:58:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:57.264 14:58:20 -- common/autotest_common.sh@871 -- # break 00:11:57.264 14:58:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.264 14:58:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.264 14:58:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.264 1+0 records in 00:11:57.264 1+0 records out 00:11:57.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543872 s, 7.5 MB/s 00:11:57.264 14:58:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.264 14:58:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.264 14:58:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.264 14:58:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.264 14:58:20 -- common/autotest_common.sh@887 -- # return 0 00:11:57.264 14:58:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.264 14:58:20 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:57.264 14:58:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:57.565 14:58:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:57.565 14:58:20 -- common/autotest_common.sh@867 -- # local i 00:11:57.565 14:58:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.565 14:58:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.565 14:58:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:57.565 14:58:20 -- common/autotest_common.sh@871 -- # break 00:11:57.565 14:58:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.565 14:58:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.565 14:58:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.565 1+0 records in 00:11:57.565 1+0 records out 00:11:57.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399043 s, 10.3 MB/s 00:11:57.565 14:58:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.565 14:58:20 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.565 14:58:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.565 14:58:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.565 14:58:20 -- common/autotest_common.sh@887 -- # return 0 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:57.565 14:58:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:57.840 14:58:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:57.840 14:58:21 -- common/autotest_common.sh@867 -- # local i 00:11:57.840 14:58:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.840 14:58:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.840 14:58:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:57.840 14:58:21 -- common/autotest_common.sh@871 -- # break 00:11:57.840 14:58:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.840 14:58:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.840 14:58:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.840 1+0 records in 00:11:57.840 1+0 records out 00:11:57.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102563 s, 4.0 MB/s 00:11:57.840 14:58:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.840 14:58:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.840 14:58:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.840 14:58:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.840 14:58:21 -- common/autotest_common.sh@887 -- # return 0 
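Each nbd_start_disk above is followed by the same readiness dance: poll /proc/partitions until the kernel exposes the new nbd name (up to 20 tries), then prove the device is actually readable by pulling a single 4096-byte block through it with O_DIRECT and checking the copied size. A condensed sketch of that helper; the polling delay and scratch path are assumptions, while the checks themselves are the ones visible in the trace:

waitfornbd_sketch() {
    local nbd_name=$1 i scratch=/tmp/nbdtest    # scratch path is hypothetical
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                               # assumed inter-poll delay
    done
    # One direct-I/O block read, validated by size, as in the dd/stat pair above.
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct \
        && [ "$(stat -c %s "$scratch")" -eq 4096 ]
    local rc=$?
    rm -f "$scratch"
    return $rc
}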
00:11:57.840 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:57.840 14:58:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:57.841 14:58:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:57.841 14:58:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:57.841 14:58:21 -- common/autotest_common.sh@867 -- # local i 00:11:57.841 14:58:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.841 14:58:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.841 14:58:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:57.841 14:58:21 -- common/autotest_common.sh@871 -- # break 00:11:57.841 14:58:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.841 14:58:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.841 14:58:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.841 1+0 records in 00:11:57.841 1+0 records out 00:11:57.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775738 s, 5.3 MB/s 00:11:57.841 14:58:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.841 14:58:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.841 14:58:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.841 14:58:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.841 14:58:21 -- common/autotest_common.sh@887 -- # return 0 00:11:57.841 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.841 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:57.841 14:58:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:58.102 14:58:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:58.102 14:58:21 -- common/autotest_common.sh@867 -- # local i 00:11:58.102 14:58:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.102 14:58:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.102 14:58:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:58.102 14:58:21 -- common/autotest_common.sh@871 -- # break 00:11:58.102 14:58:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.102 14:58:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.102 14:58:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.102 1+0 records in 00:11:58.102 1+0 records out 00:11:58.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443634 s, 9.2 MB/s 00:11:58.102 14:58:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.102 14:58:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:58.102 14:58:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.102 14:58:21 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:11:58.102 14:58:21 -- common/autotest_common.sh@887 -- # return 0 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:58.102 14:58:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:58.364 14:58:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:58.364 14:58:21 -- common/autotest_common.sh@867 -- # local i 00:11:58.364 14:58:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:58.364 14:58:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:58.364 14:58:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:58.364 14:58:21 -- common/autotest_common.sh@871 -- # break 00:11:58.364 14:58:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:58.364 14:58:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:58.364 14:58:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.364 1+0 records in 00:11:58.364 1+0 records out 00:11:58.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007497 s, 5.5 MB/s 00:11:58.364 14:58:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.364 14:58:21 -- common/autotest_common.sh@884 -- # size=4096 00:11:58.364 14:58:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.364 14:58:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:58.364 14:58:21 -- common/autotest_common.sh@887 -- # return 0 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:58.364 14:58:21 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd0", 00:11:58.625 "bdev_name": "nvme0n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd1", 00:11:58.625 "bdev_name": "nvme1n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd2", 00:11:58.625 "bdev_name": "nvme1n2" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd3", 00:11:58.625 "bdev_name": "nvme1n3" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd4", 00:11:58.625 "bdev_name": "nvme2n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd5", 00:11:58.625 "bdev_name": "nvme3n1" 00:11:58.625 } 00:11:58.625 ]' 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd0", 00:11:58.625 "bdev_name": "nvme0n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd1", 00:11:58.625 "bdev_name": "nvme1n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd2", 00:11:58.625 "bdev_name": "nvme1n2" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd3", 00:11:58.625 
"bdev_name": "nvme1n3" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd4", 00:11:58.625 "bdev_name": "nvme2n1" 00:11:58.625 }, 00:11:58.625 { 00:11:58.625 "nbd_device": "/dev/nbd5", 00:11:58.625 "bdev_name": "nvme3n1" 00:11:58.625 } 00:11:58.625 ]' 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@51 -- # local i 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.625 14:58:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@41 -- # break 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.887 14:58:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@41 -- # break 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@41 -- # break 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.146 14:58:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:59.404 14:58:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:59.404 14:58:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:59.404 14:58:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:59.404 
14:58:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@41 -- # break 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.405 14:58:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@41 -- # break 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.664 14:58:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@41 -- # break 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.924 14:58:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@65 -- # true 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@65 -- # count=0 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@122 -- # count=0 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@127 -- # return 0 00:12:00.184 14:58:23 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@12 -- # local i 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:00.184 14:58:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:00.184 /dev/nbd0 00:12:00.185 14:58:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:00.185 14:58:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:00.185 14:58:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:00.185 14:58:23 -- common/autotest_common.sh@867 -- # local i 00:12:00.185 14:58:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:00.185 14:58:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:00.185 14:58:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:00.185 14:58:23 -- common/autotest_common.sh@871 -- # break 00:12:00.185 14:58:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:00.185 14:58:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:00.185 14:58:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.185 1+0 records in 00:12:00.185 1+0 records out 00:12:00.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778982 s, 5.3 MB/s 00:12:00.185 14:58:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.446 14:58:23 -- common/autotest_common.sh@884 -- # size=4096 00:12:00.446 14:58:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.446 14:58:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:00.446 14:58:23 -- common/autotest_common.sh@887 -- # return 0 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:00.446 /dev/nbd1 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:00.446 14:58:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:00.446 14:58:23 -- common/autotest_common.sh@867 -- # local i 00:12:00.446 14:58:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:00.446 14:58:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:00.446 14:58:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:00.446 14:58:23 -- common/autotest_common.sh@871 -- # break 
00:12:00.446 14:58:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:00.446 14:58:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:00.446 14:58:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.446 1+0 records in 00:12:00.446 1+0 records out 00:12:00.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054964 s, 7.5 MB/s 00:12:00.446 14:58:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.446 14:58:23 -- common/autotest_common.sh@884 -- # size=4096 00:12:00.446 14:58:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.446 14:58:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:00.446 14:58:23 -- common/autotest_common.sh@887 -- # return 0 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:00.446 14:58:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:00.707 /dev/nbd10 00:12:00.707 14:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:00.707 14:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:00.707 14:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:00.707 14:58:24 -- common/autotest_common.sh@867 -- # local i 00:12:00.707 14:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:00.707 14:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:00.707 14:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:00.707 14:58:24 -- common/autotest_common.sh@871 -- # break 00:12:00.707 14:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:00.707 14:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:00.707 14:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.707 1+0 records in 00:12:00.707 1+0 records out 00:12:00.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510062 s, 8.0 MB/s 00:12:00.707 14:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.707 14:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:12:00.707 14:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.707 14:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:00.707 14:58:24 -- common/autotest_common.sh@887 -- # return 0 00:12:00.707 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.707 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:00.707 14:58:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:00.967 /dev/nbd11 00:12:00.967 14:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:00.967 14:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:00.967 14:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:00.967 14:58:24 -- common/autotest_common.sh@867 -- # local i 00:12:00.968 14:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:00.968 14:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:00.968 14:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:00.968 14:58:24 -- 
common/autotest_common.sh@871 -- # break 00:12:00.968 14:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:00.968 14:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:00.968 14:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.968 1+0 records in 00:12:00.968 1+0 records out 00:12:00.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474617 s, 8.6 MB/s 00:12:00.968 14:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.968 14:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:12:00.968 14:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.968 14:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:00.968 14:58:24 -- common/autotest_common.sh@887 -- # return 0 00:12:00.968 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:00.968 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:00.968 14:58:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:01.229 /dev/nbd12 00:12:01.229 14:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:01.229 14:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:01.229 14:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:01.229 14:58:24 -- common/autotest_common.sh@867 -- # local i 00:12:01.229 14:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:01.229 14:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:01.229 14:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:01.229 14:58:24 -- common/autotest_common.sh@871 -- # break 00:12:01.229 14:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:01.229 14:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:01.229 14:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.229 1+0 records in 00:12:01.229 1+0 records out 00:12:01.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727772 s, 5.6 MB/s 00:12:01.229 14:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.229 14:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:12:01.229 14:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.229 14:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:01.229 14:58:24 -- common/autotest_common.sh@887 -- # return 0 00:12:01.229 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.229 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:01.229 14:58:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:01.229 /dev/nbd13 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:01.490 14:58:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:01.490 14:58:24 -- common/autotest_common.sh@867 -- # local i 00:12:01.490 14:58:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:01.490 14:58:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:01.490 14:58:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:12:01.490 14:58:24 -- common/autotest_common.sh@871 -- # break 00:12:01.490 14:58:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:01.490 14:58:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:01.490 14:58:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.490 1+0 records in 00:12:01.490 1+0 records out 00:12:01.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107921 s, 3.8 MB/s 00:12:01.490 14:58:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.490 14:58:24 -- common/autotest_common.sh@884 -- # size=4096 00:12:01.490 14:58:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.490 14:58:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:01.490 14:58:24 -- common/autotest_common.sh@887 -- # return 0 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.490 14:58:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:01.490 14:58:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:01.490 { 00:12:01.490 "nbd_device": "/dev/nbd0", 00:12:01.491 "bdev_name": "nvme0n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd1", 00:12:01.491 "bdev_name": "nvme1n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd10", 00:12:01.491 "bdev_name": "nvme1n2" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd11", 00:12:01.491 "bdev_name": "nvme1n3" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd12", 00:12:01.491 "bdev_name": "nvme2n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd13", 00:12:01.491 "bdev_name": "nvme3n1" 00:12:01.491 } 00:12:01.491 ]' 00:12:01.491 14:58:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd0", 00:12:01.491 "bdev_name": "nvme0n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd1", 00:12:01.491 "bdev_name": "nvme1n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd10", 00:12:01.491 "bdev_name": "nvme1n2" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd11", 00:12:01.491 "bdev_name": "nvme1n3" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd12", 00:12:01.491 "bdev_name": "nvme2n1" 00:12:01.491 }, 00:12:01.491 { 00:12:01.491 "nbd_device": "/dev/nbd13", 00:12:01.491 "bdev_name": "nvme3n1" 00:12:01.491 } 00:12:01.491 ]' 00:12:01.491 14:58:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:01.491 14:58:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:01.491 /dev/nbd1 00:12:01.491 /dev/nbd10 00:12:01.491 /dev/nbd11 00:12:01.491 /dev/nbd12 00:12:01.491 /dev/nbd13' 00:12:01.491 14:58:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:01.491 /dev/nbd1 00:12:01.491 /dev/nbd10 00:12:01.491 /dev/nbd11 00:12:01.491 /dev/nbd12 00:12:01.491 /dev/nbd13' 00:12:01.491 14:58:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@65 -- # count=6 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@95 -- # count=6 
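nbd_get_disks returns the live nbd_device-to-bdev_name mapping, and nbd_get_count simply counts how many entries name a /dev/nbd node; the run above expects six exports, and the 6 -ne 6 comparison that follows confirms it got them. The same check in isolation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
[ "$count" -eq 6 ] || echo "expected 6 nbd exports, found $count"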
00:12:01.752 14:58:25 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:01.752 256+0 records in 00:12:01.752 256+0 records out 00:12:01.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00985902 s, 106 MB/s 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:01.752 256+0 records in 00:12:01.752 256+0 records out 00:12:01.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0673795 s, 15.6 MB/s 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:01.752 256+0 records in 00:12:01.752 256+0 records out 00:12:01.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162763 s, 6.4 MB/s 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:01.752 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:02.010 256+0 records in 00:12:02.010 256+0 records out 00:12:02.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787983 s, 13.3 MB/s 00:12:02.010 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:02.011 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:02.011 256+0 records in 00:12:02.011 256+0 records out 00:12:02.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0560245 s, 18.7 MB/s 00:12:02.011 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:02.011 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:02.011 256+0 records in 00:12:02.011 256+0 records out 00:12:02.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0724871 s, 14.5 MB/s 00:12:02.011 14:58:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:02.011 14:58:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:02.269 256+0 records in 00:12:02.269 256+0 records out 00:12:02.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0620133 s, 16.9 MB/s 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:02.269 14:58:25 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@51 -- # local i 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@41 -- # break 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.269 14:58:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.528 14:58:26 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@41 -- # break 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.528 14:58:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@41 -- # break 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.786 14:58:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@41 -- # break 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.044 14:58:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:03.302 14:58:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@41 -- # break 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@41 -- # break 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.303 14:58:26 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@65 -- # true 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@65 -- # count=0 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@104 -- # count=0 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@109 -- # return 0 00:12:03.561 14:58:27 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:03.561 14:58:27 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:03.819 malloc_lvol_verify 00:12:03.819 14:58:27 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:04.078 a7976b2f-2f3a-44ff-8d6c-1f1d68a2af23 00:12:04.078 14:58:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:04.337 540ed7e5-ce3f-4f30-885d-7c8920d29be7 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:04.337 /dev/nbd0 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:04.337 mke2fs 1.47.0 (5-Feb-2023) 00:12:04.337 Discarding device blocks: 0/4096 done 00:12:04.337 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:04.337 00:12:04.337 Allocating group tables: 0/1 done 00:12:04.337 Writing inode tables: 0/1 done 00:12:04.337 Creating journal (1024 blocks): done 00:12:04.337 Writing superblocks and filesystem accounting information: 0/1 done 00:12:04.337 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@51 -- # local i 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.337 14:58:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@41 -- # break 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:04.596 14:58:28 -- bdev/nbd_common.sh@147 -- # return 0 00:12:04.596 14:58:28 -- bdev/blockdev.sh@324 -- # killprocess 79263 00:12:04.596 14:58:28 -- common/autotest_common.sh@936 -- # '[' -z 79263 ']' 00:12:04.596 14:58:28 -- common/autotest_common.sh@940 -- # kill -0 79263 00:12:04.596 14:58:28 -- common/autotest_common.sh@941 -- # uname 00:12:04.596 14:58:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.596 14:58:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79263 00:12:04.596 killing process with pid 79263 00:12:04.596 14:58:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:04.596 14:58:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:04.596 14:58:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79263' 00:12:04.596 14:58:28 -- common/autotest_common.sh@955 -- # kill 79263 00:12:04.596 14:58:28 -- common/autotest_common.sh@960 -- # wait 79263 00:12:04.856 14:58:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:04.856 00:12:04.856 real 0m8.665s 00:12:04.856 user 0m12.417s 00:12:04.856 sys 0m3.055s 00:12:04.856 14:58:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.856 ************************************ 00:12:04.856 END TEST bdev_nbd 00:12:04.856 ************************************ 00:12:04.856 14:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:04.856 14:58:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:04.856 14:58:28 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.856 14:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:04.856 ************************************ 00:12:04.856 START TEST bdev_fio 00:12:04.856 ************************************ 00:12:04.856 14:58:28 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@329 -- # local env_context 00:12:04.856 14:58:28 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:04.856 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:04.856 14:58:28 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:04.856 14:58:28 -- bdev/blockdev.sh@337 -- # echo '' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:04.856 14:58:28 -- bdev/blockdev.sh@337 -- # env_context= 00:12:04.856 14:58:28 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.856 14:58:28 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:04.856 14:58:28 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:04.856 
14:58:28 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:04.856 14:58:28 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:04.856 14:58:28 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.856 14:58:28 -- common/autotest_common.sh@1290 -- # cat 00:12:04.856 14:58:28 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1303 -- # cat 00:12:04.856 14:58:28 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:04.856 14:58:28 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:04.856 14:58:28 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:04.856 14:58:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:04.856 14:58:28 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:04.856 14:58:28 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:04.856 14:58:28 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:04.856 14:58:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:04.856 14:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.856 14:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:05.116 ************************************ 00:12:05.116 START TEST bdev_fio_rw_verify 00:12:05.116 ************************************ 00:12:05.116 14:58:28 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.116 14:58:28 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.116 14:58:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:05.116 14:58:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:05.116 14:58:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:05.116 14:58:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:05.116 14:58:28 -- common/autotest_common.sh@1330 -- # shift 00:12:05.116 14:58:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:05.116 14:58:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:05.116 14:58:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:05.116 14:58:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:05.116 14:58:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:05.116 14:58:28 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:05.116 14:58:28 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:05.116 14:58:28 -- common/autotest_common.sh@1336 -- # break 00:12:05.116 14:58:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:05.116 14:58:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.116 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.116 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.116 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.117 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.117 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.117 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.117 fio-3.35 00:12:05.117 Starting 6 threads 00:12:17.383 00:12:17.383 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=79646: Mon Nov 18 14:58:39 2024 00:12:17.383 read: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(924MiB/10001msec) 00:12:17.383 slat (usec): min=2, max=1212, avg= 5.16, stdev= 6.80 00:12:17.383 clat (usec): min=80, max=5755, avg=782.66, stdev=548.61 00:12:17.383 lat (usec): min=87, max=5769, avg=787.82, stdev=548.99 00:12:17.383 clat percentiles (usec): 00:12:17.383 | 50.000th=[ 652], 99.000th=[ 2540], 99.900th=[ 3654], 99.990th=[ 4948], 
00:12:17.383 | 99.999th=[ 5735] 00:12:17.383 write: IOPS=24.0k, BW=93.7MiB/s (98.2MB/s)(937MiB/10001msec); 0 zone resets 00:12:17.383 slat (usec): min=12, max=5736, avg=29.91, stdev=91.28 00:12:17.383 clat (usec): min=74, max=18058, avg=973.03, stdev=614.70 00:12:17.383 lat (usec): min=97, max=18088, avg=1002.94, stdev=622.72 00:12:17.383 clat percentiles (usec): 00:12:17.383 | 50.000th=[ 840], 99.000th=[ 2900], 99.900th=[ 4080], 99.990th=[ 6652], 00:12:17.383 | 99.999th=[17957] 00:12:17.383 bw ( KiB/s): min=58708, max=137576, per=100.00%, avg=97022.26, stdev=3544.24, samples=114 00:12:17.383 iops : min=14677, max=34394, avg=24255.11, stdev=886.05, samples=114 00:12:17.383 lat (usec) : 100=0.01%, 250=9.09%, 500=21.54%, 750=19.49%, 1000=15.94% 00:12:17.383 lat (msec) : 2=28.98%, 4=4.85%, 10=0.09%, 20=0.01% 00:12:17.383 cpu : usr=47.02%, sys=32.99%, ctx=7712, majf=0, minf=22846 00:12:17.383 IO depths : 1=12.1%, 2=24.7%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:17.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.383 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.383 issued rwts: total=236437,239889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.383 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:17.383 00:12:17.383 Run status group 0 (all jobs): 00:12:17.383 READ: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=924MiB (968MB), run=10001-10001msec 00:12:17.383 WRITE: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=937MiB (983MB), run=10001-10001msec 00:12:17.383 ----------------------------------------------------- 00:12:17.383 Suppressions used: 00:12:17.383 count bytes template 00:12:17.383 6 48 /usr/src/fio/parse.c 00:12:17.383 3292 316032 /usr/src/fio/iolog.c 00:12:17.383 1 8 libtcmalloc_minimal.so 00:12:17.383 1 904 libcrypto.so 00:12:17.383 ----------------------------------------------------- 00:12:17.384 00:12:17.384 00:12:17.384 real 0m11.012s 00:12:17.384 user 0m28.916s 00:12:17.384 sys 0m20.109s 00:12:17.384 14:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:17.384 ************************************ 00:12:17.384 END TEST bdev_fio_rw_verify 00:12:17.384 ************************************ 00:12:17.384 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.384 14:58:39 -- bdev/blockdev.sh@348 -- # rm -f 00:12:17.384 14:58:39 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.384 14:58:39 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.384 14:58:39 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:17.384 14:58:39 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:17.384 14:58:39 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:17.384 14:58:39 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:17.384 14:58:39 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.384 14:58:39 -- common/autotest_common.sh@1290 -- # cat 
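Note on the rw-verify invocation traced above: the suite runs stock fio against SPDK bdevs by LD_PRELOADing the spdk_bdev ioengine plugin (alongside libasan, which the harness located via ldd), and feeding the bdev layout in as a JSON config instead of device nodes. Condensed from the xtrace, with this job's paths:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Each [job_*] section written into bdev.fio names one bdev via filename=, which is why six jobs appear in the fio banner above.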
00:12:17.384 14:58:39 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:17.384 14:58:39 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:17.384 14:58:39 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "25114c04-a474-4a29-901f-3ac5e596f7eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "25114c04-a474-4a29-901f-3ac5e596f7eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "adc4d5d1-a8de-4d24-a07e-82af547f46bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "adc4d5d1-a8de-4d24-a07e-82af547f46bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "4bd8a14d-c09d-474b-88e9-fd9af3e49c8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bd8a14d-c09d-474b-88e9-fd9af3e49c8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d9a43a8f-9008-46f4-b0d9-09704efb4b0c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9a43a8f-9008-46f4-b0d9-09704efb4b0c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "be98427f-4c93-4b62-8247-34fab359e4c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "be98427f-4c93-4b62-8247-34fab359e4c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "affb863b-d9e0-4710-932c-809aa4862844"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "affb863b-d9e0-4710-932c-809aa4862844",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:17.384 14:58:39 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:17.384 14:58:39 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.384 /home/vagrant/spdk_repo/spdk 00:12:17.384 14:58:39 -- bdev/blockdev.sh@360 -- # popd 00:12:17.384 14:58:39 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:17.384 14:58:39 -- bdev/blockdev.sh@362 -- # return 0 00:12:17.384 00:12:17.384 real 0m11.165s 00:12:17.384 user 0m28.982s 00:12:17.384 sys 0m20.187s 00:12:17.384 14:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:17.384 ************************************ 00:12:17.384 END TEST bdev_fio 00:12:17.384 ************************************ 00:12:17.384 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.384 14:58:39 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:17.384 14:58:39 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:17.384 14:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.384 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.384 ************************************ 00:12:17.384 START TEST bdev_verify 00:12:17.384 ************************************ 00:12:17.384 14:58:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:17.384 [2024-11-18 14:58:39.666466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:17.384 [2024-11-18 14:58:39.666599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79819 ] 00:12:17.384 [2024-11-18 14:58:39.815971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:17.384 [2024-11-18 14:58:39.858147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.384 [2024-11-18 14:58:39.858171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.384 Running I/O for 5 seconds... 
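The bdev_verify stage starting here drives the bdevperf example app rather than fio. Condensed from the command line traced above: 128-deep queues, 4 KiB I/Os, a 5-second verify workload, core mask 0x3 with -C; in this run each bdev is exercised from both cores of the mask, which is why every device appears twice in the results table that follows.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3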
00:12:22.653 00:12:22.653 Latency(us) 00:12:22.653 [2024-11-18T14:58:46.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x0 length 0x20000 00:12:22.653 nvme0n1 : 5.06 3255.60 12.72 0.00 0.00 39167.80 16031.11 56058.49 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x20000 length 0x20000 00:12:22.653 nvme0n1 : 5.07 3130.22 12.23 0.00 0.00 40734.95 3730.51 54445.29 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x0 length 0x80000 00:12:22.653 nvme1n1 : 5.06 3280.35 12.81 0.00 0.00 38880.55 2785.28 53638.70 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x80000 length 0x80000 00:12:22.653 nvme1n1 : 5.07 3099.31 12.11 0.00 0.00 41122.16 3402.83 55251.89 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x0 length 0x80000 00:12:22.653 nvme1n2 : 5.06 3237.78 12.65 0.00 0.00 39306.58 3175.98 51622.20 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x80000 length 0x80000 00:12:22.653 nvme1n2 : 5.07 3060.40 11.95 0.00 0.00 41548.60 5721.80 57671.68 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x0 length 0x80000 00:12:22.653 nvme1n3 : 5.06 3203.72 12.51 0.00 0.00 39711.81 4461.49 56865.08 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x80000 length 0x80000 00:12:22.653 nvme1n3 : 5.07 3095.81 12.09 0.00 0.00 41021.20 3163.37 63721.16 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0x0 length 0xbd0bd 00:12:22.653 nvme2n1 : 5.06 3202.59 12.51 0.00 0.00 39664.67 6654.42 54041.99 00:12:22.653 [2024-11-18T14:58:46.243Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.653 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:22.654 nvme2n1 : 5.07 3074.76 12.01 0.00 0.00 41231.64 7007.31 56865.08 00:12:22.654 [2024-11-18T14:58:46.244Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:22.654 Verification LBA range: start 0x0 length 0xa0000 00:12:22.654 nvme3n1 : 5.07 3318.01 12.96 0.00 0.00 38208.88 2772.68 42144.69 00:12:22.654 [2024-11-18T14:58:46.244Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:22.654 Verification LBA range: start 0xa0000 length 0xa0000 00:12:22.654 nvme3n1 : 5.08 3114.21 12.16 0.00 0.00 40611.76 3276.80 55251.89 00:12:22.654 [2024-11-18T14:58:46.244Z] =================================================================================================================== 00:12:22.654 [2024-11-18T14:58:46.244Z] Total : 38072.77 148.72 0.00 0.00 40074.45 2772.68 63721.16 00:12:22.654 00:12:22.654 real 0m5.742s 00:12:22.654 user 0m7.417s 
00:12:22.654 sys 0m3.069s 00:12:22.654 14:58:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.654 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.654 ************************************ 00:12:22.654 END TEST bdev_verify 00:12:22.654 ************************************ 00:12:22.654 14:58:45 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:22.654 14:58:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:22.654 14:58:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.654 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.654 ************************************ 00:12:22.654 START TEST bdev_verify_big_io 00:12:22.654 ************************************ 00:12:22.654 14:58:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:22.654 [2024-11-18 14:58:45.451450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:22.654 [2024-11-18 14:58:45.451563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79907 ] 00:12:22.654 [2024-11-18 14:58:45.602988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:22.654 [2024-11-18 14:58:45.644265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.654 [2024-11-18 14:58:45.644302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.654 Running I/O for 5 seconds... 
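bdev_verify_big_io repeats the previous bdevperf run with only the I/O size changed (-o 65536, i.e. 64 KiB, instead of -o 4096); queue depth, workload, runtime, and core mask are identical:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3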
00:12:29.238 00:12:29.238 Latency(us) 00:12:29.238 [2024-11-18T14:58:52.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0x2000 00:12:29.238 nvme0n1 : 5.77 210.26 13.14 0.00 0.00 584192.77 64124.46 709805.29 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x2000 length 0x2000 00:12:29.238 nvme0n1 : 5.78 195.29 12.21 0.00 0.00 632558.46 59688.17 748521.94 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0x8000 00:12:29.238 nvme1n1 : 5.78 182.34 11.40 0.00 0.00 671171.10 66140.95 690446.97 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x8000 length 0x8000 00:12:29.238 nvme1n1 : 5.78 195.21 12.20 0.00 0.00 616162.53 75013.51 796917.76 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0x8000 00:12:29.238 nvme1n2 : 5.79 182.30 11.39 0.00 0.00 665488.96 10788.23 832408.02 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x8000 length 0x8000 00:12:29.238 nvme1n2 : 5.80 165.82 10.36 0.00 0.00 713885.86 78643.20 716258.07 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0x8000 00:12:29.238 nvme1n3 : 5.79 196.41 12.28 0.00 0.00 606830.15 10939.47 748521.94 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x8000 length 0x8000 00:12:29.238 nvme1n3 : 5.80 239.70 14.98 0.00 0.00 486048.72 72997.02 600108.11 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0xbd0b 00:12:29.238 nvme2n1 : 5.79 228.13 14.26 0.00 0.00 505383.54 11292.36 567844.23 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:29.238 nvme2n1 : 5.81 223.25 13.95 0.00 0.00 508213.43 22483.89 677541.42 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0x0 length 0xa000 00:12:29.238 nvme3n1 : 5.79 223.90 13.99 0.00 0.00 503663.24 8418.86 700126.13 00:12:29.238 [2024-11-18T14:58:52.828Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:29.238 Verification LBA range: start 0xa000 length 0xa000 00:12:29.238 nvme3n1 : 5.81 223.15 13.95 0.00 0.00 494987.99 4058.19 735616.39 00:12:29.238 [2024-11-18T14:58:52.828Z] =================================================================================================================== 00:12:29.238 [2024-11-18T14:58:52.828Z] Total : 2465.77 154.11 0.00 0.00 574343.92 4058.19 832408.02 00:12:29.238 00:12:29.238 real 0m6.520s 00:12:29.238 user 
0m11.903s 00:12:29.239 sys 0m0.513s 00:12:29.239 14:58:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.239 ************************************ 00:12:29.239 END TEST bdev_verify_big_io 00:12:29.239 ************************************ 00:12:29.239 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:29.239 14:58:51 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:29.239 14:58:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:29.239 14:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.239 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:29.239 ************************************ 00:12:29.239 START TEST bdev_write_zeroes 00:12:29.239 ************************************ 00:12:29.239 14:58:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:29.239 [2024-11-18 14:58:52.034205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:29.239 [2024-11-18 14:58:52.035305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80006 ] 00:12:29.239 [2024-11-18 14:58:52.190163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.239 [2024-11-18 14:58:52.230313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.239 Running I/O for 1 seconds... 00:12:30.183 00:12:30.183 Latency(us) 00:12:30.183 [2024-11-18T14:58:53.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.183 [2024-11-18T14:58:53.773Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.183 nvme0n1 : 1.01 12015.17 46.93 0.00 0.00 10643.95 8015.56 20265.75 00:12:30.183 [2024-11-18T14:58:53.773Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.183 nvme1n1 : 1.02 11945.15 46.66 0.00 0.00 10687.13 7965.14 22988.01 00:12:30.183 [2024-11-18T14:58:53.773Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.184 nvme1n2 : 1.02 11929.31 46.60 0.00 0.00 10678.39 7965.14 22080.59 00:12:30.184 [2024-11-18T14:58:53.774Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.184 nvme1n3 : 1.02 11913.47 46.54 0.00 0.00 10669.15 7965.14 21173.17 00:12:30.184 [2024-11-18T14:58:53.774Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.184 nvme2n1 : 1.03 15608.86 60.97 0.00 0.00 8123.63 5167.26 17241.01 00:12:30.184 [2024-11-18T14:58:53.774Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:30.184 nvme3n1 : 1.03 11847.04 46.28 0.00 0.00 10626.06 4133.81 24399.56 00:12:30.184 [2024-11-18T14:58:53.774Z] =================================================================================================================== 00:12:30.184 [2024-11-18T14:58:53.774Z] Total : 75259.00 293.98 0.00 0.00 10132.33 4133.81 24399.56 00:12:30.184 00:12:30.184 real 0m1.682s 00:12:30.184 user 0m1.014s 00:12:30.184 sys 0m0.506s 00:12:30.184 14:58:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:12:30.184 ************************************ 00:12:30.184 END TEST bdev_write_zeroes 00:12:30.184 ************************************ 00:12:30.184 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:12:30.184 14:58:53 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:30.184 14:58:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:30.184 14:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.184 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:12:30.184 ************************************ 00:12:30.184 START TEST bdev_json_nonenclosed 00:12:30.184 ************************************ 00:12:30.184 14:58:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:30.184 [2024-11-18 14:58:53.770032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:30.184 [2024-11-18 14:58:53.770152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80045 ] 00:12:30.444 [2024-11-18 14:58:53.919287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.444 [2024-11-18 14:58:53.959756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.444 [2024-11-18 14:58:53.959916] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:30.444 [2024-11-18 14:58:53.959937] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:30.704 00:12:30.704 real 0m0.336s 00:12:30.704 user 0m0.135s 00:12:30.704 sys 0m0.097s 00:12:30.704 14:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.704 ************************************ 00:12:30.704 END TEST bdev_json_nonenclosed 00:12:30.704 ************************************ 00:12:30.704 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 14:58:54 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:30.705 14:58:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:30.705 14:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.705 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 ************************************ 00:12:30.705 START TEST bdev_json_nonarray 00:12:30.705 ************************************ 00:12:30.705 14:58:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:30.705 [2024-11-18 14:58:54.168584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:30.705 [2024-11-18 14:58:54.168698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80066 ] 00:12:30.966 [2024-11-18 14:58:54.316203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.966 [2024-11-18 14:58:54.356528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.966 [2024-11-18 14:58:54.356700] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:30.966 [2024-11-18 14:58:54.356720] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:30.966 00:12:30.966 real 0m0.336s 00:12:30.966 user 0m0.135s 00:12:30.966 sys 0m0.097s 00:12:30.966 14:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.966 ************************************ 00:12:30.966 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.966 END TEST bdev_json_nonarray 00:12:30.966 ************************************ 00:12:30.966 14:58:54 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:30.966 14:58:54 -- bdev/blockdev.sh@809 -- # cleanup 00:12:30.966 14:58:54 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:30.966 14:58:54 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:30.966 14:58:54 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:30.966 14:58:54 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:58.474 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.474 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.474 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.474 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.474 00:12:58.474 real 1m11.388s 00:12:58.474 user 1m15.722s 00:12:58.474 sys 1m34.713s 00:12:58.474 ************************************ 00:12:58.474 END TEST blockdev_xnvme 00:12:58.474 ************************************ 00:12:58.474 14:59:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.474 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:58.474 14:59:21 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:58.474 14:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.474 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:58.474 ************************************ 00:12:58.474 START TEST ublk 00:12:58.474 ************************************ 00:12:58.474 14:59:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:58.474 * Looking for test storage... 
00:12:58.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:58.474 14:59:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.474 14:59:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.474 14:59:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:58.474 14:59:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:58.474 14:59:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:58.474 14:59:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:58.474 14:59:21 -- scripts/common.sh@335 -- # IFS=.-: 00:12:58.474 14:59:21 -- scripts/common.sh@335 -- # read -ra ver1 00:12:58.474 14:59:21 -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.474 14:59:21 -- scripts/common.sh@336 -- # read -ra ver2 00:12:58.474 14:59:21 -- scripts/common.sh@337 -- # local 'op=<' 00:12:58.474 14:59:21 -- scripts/common.sh@339 -- # ver1_l=2 00:12:58.474 14:59:21 -- scripts/common.sh@340 -- # ver2_l=1 00:12:58.474 14:59:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:58.474 14:59:21 -- scripts/common.sh@343 -- # case "$op" in 00:12:58.474 14:59:21 -- scripts/common.sh@344 -- # : 1 00:12:58.474 14:59:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:58.474 14:59:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.474 14:59:21 -- scripts/common.sh@364 -- # decimal 1 00:12:58.474 14:59:21 -- scripts/common.sh@352 -- # local d=1 00:12:58.474 14:59:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.474 14:59:21 -- scripts/common.sh@354 -- # echo 1 00:12:58.474 14:59:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:58.474 14:59:21 -- scripts/common.sh@365 -- # decimal 2 00:12:58.474 14:59:21 -- scripts/common.sh@352 -- # local d=2 00:12:58.474 14:59:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.474 14:59:21 -- scripts/common.sh@354 -- # echo 2 00:12:58.474 14:59:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:58.474 14:59:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:58.474 14:59:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:58.474 14:59:21 -- scripts/common.sh@367 -- # return 0 00:12:58.474 14:59:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.474 --rc genhtml_branch_coverage=1 00:12:58.474 --rc genhtml_function_coverage=1 00:12:58.474 --rc genhtml_legend=1 00:12:58.474 --rc geninfo_all_blocks=1 00:12:58.474 --rc geninfo_unexecuted_blocks=1 00:12:58.474 00:12:58.474 ' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.474 --rc genhtml_branch_coverage=1 00:12:58.474 --rc genhtml_function_coverage=1 00:12:58.474 --rc genhtml_legend=1 00:12:58.474 --rc geninfo_all_blocks=1 00:12:58.474 --rc geninfo_unexecuted_blocks=1 00:12:58.474 00:12:58.474 ' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.474 --rc genhtml_branch_coverage=1 00:12:58.474 --rc genhtml_function_coverage=1 00:12:58.474 --rc genhtml_legend=1 00:12:58.474 --rc geninfo_all_blocks=1 00:12:58.474 --rc geninfo_unexecuted_blocks=1 00:12:58.474 00:12:58.474 ' 00:12:58.474 14:59:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.474 --rc genhtml_branch_coverage=1 00:12:58.474 --rc genhtml_function_coverage=1 00:12:58.474 --rc genhtml_legend=1 00:12:58.474 --rc geninfo_all_blocks=1 00:12:58.474 --rc geninfo_unexecuted_blocks=1 00:12:58.474 00:12:58.474 ' 00:12:58.474 14:59:21 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:58.474 14:59:21 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:58.474 14:59:21 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:58.474 14:59:21 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:58.474 14:59:21 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:58.474 14:59:21 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:58.474 14:59:21 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:58.474 14:59:21 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:58.474 14:59:21 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:58.474 14:59:21 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:58.474 14:59:21 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:58.474 14:59:21 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:58.474 14:59:21 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:58.474 14:59:21 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:58.474 14:59:21 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:58.474 14:59:21 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:58.474 14:59:21 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:58.474 14:59:21 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:58.474 14:59:21 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:58.474 14:59:21 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:58.474 14:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:58.474 14:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.474 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:58.475 ************************************ 00:12:58.475 START TEST test_save_ublk_config 00:12:58.475 ************************************ 00:12:58.475 14:59:21 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:58.475 14:59:21 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:58.475 14:59:21 -- ublk/ublk.sh@103 -- # tgtpid=80519 00:12:58.475 14:59:21 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:58.475 14:59:21 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:58.475 14:59:21 -- ublk/ublk.sh@106 -- # waitforlisten 80519 00:12:58.475 14:59:21 -- common/autotest_common.sh@829 -- # '[' -z 80519 ']' 00:12:58.475 14:59:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.475 14:59:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.475 14:59:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.475 14:59:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.475 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:12:58.475 [2024-11-18 14:59:22.002361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
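The test_save_ublk_config flow that follows is: start spdk_tgt with -L ublk, create the ublk target and a malloc bdev over RPC, export the bdev with ublk_start_disk, then round-trip the configuration through save_config. A sketch of the RPC sequence implied by the trace below; the malloc sizing is illustrative since the log does not show the exact create arguments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target                 # "UBLK target created successfully" below
    "$rpc" bdev_malloc_create -b malloc0 128 4096   # sizes assumed; trace only shows the name malloc0
    "$rpc" ublk_start_disk malloc0 0          # this run reports num_queues 1, queue_depth 128
    "$rpc" save_config > /tmp/ublk_config.json      # output path hypothetical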
00:12:58.475 [2024-11-18 14:59:22.002609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80519 ] 00:12:58.733 [2024-11-18 14:59:22.151848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.733 [2024-11-18 14:59:22.204553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.733 [2024-11-18 14:59:22.204955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.300 14:59:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.300 14:59:22 -- common/autotest_common.sh@862 -- # return 0 00:12:59.300 14:59:22 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:59.300 14:59:22 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:59.300 14:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.300 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 [2024-11-18 14:59:22.813580] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:59.300 malloc0 00:12:59.300 [2024-11-18 14:59:22.845445] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:59.300 [2024-11-18 14:59:22.845532] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:59.300 [2024-11-18 14:59:22.845540] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:59.300 [2024-11-18 14:59:22.845553] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:59.300 [2024-11-18 14:59:22.854422] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:59.300 [2024-11-18 14:59:22.854452] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:59.300 [2024-11-18 14:59:22.861344] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:59.300 [2024-11-18 14:59:22.861444] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:59.300 [2024-11-18 14:59:22.878346] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:59.300 0 00:12:59.300 14:59:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.559 14:59:22 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:59.559 14:59:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.559 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.559 14:59:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.559 14:59:23 -- ublk/ublk.sh@115 -- # config='{ 00:12:59.559 "subsystems": [ 00:12:59.559 { 00:12:59.559 "subsystem": "iobuf", 00:12:59.559 "config": [ 00:12:59.559 { 00:12:59.559 "method": "iobuf_set_options", 00:12:59.559 "params": { 00:12:59.559 "small_pool_count": 8192, 00:12:59.559 "large_pool_count": 1024, 00:12:59.559 "small_bufsize": 8192, 00:12:59.559 "large_bufsize": 135168 00:12:59.559 } 00:12:59.559 } 00:12:59.559 ] 00:12:59.559 }, 00:12:59.559 { 00:12:59.559 "subsystem": "sock", 00:12:59.559 "config": [ 00:12:59.559 { 00:12:59.559 "method": "sock_impl_set_options", 00:12:59.559 "params": { 00:12:59.559 "impl_name": "posix", 00:12:59.559 "recv_buf_size": 2097152, 00:12:59.559 "send_buf_size": 2097152, 00:12:59.559 "enable_recv_pipe": true, 00:12:59.559 "enable_quickack": false, 00:12:59.559 "enable_placement_id": 0, 00:12:59.559 
"enable_zerocopy_send_server": true, 00:12:59.559 "enable_zerocopy_send_client": false, 00:12:59.559 "zerocopy_threshold": 0, 00:12:59.559 "tls_version": 0, 00:12:59.559 "enable_ktls": false 00:12:59.559 } 00:12:59.559 }, 00:12:59.559 { 00:12:59.559 "method": "sock_impl_set_options", 00:12:59.559 "params": { 00:12:59.559 "impl_name": "ssl", 00:12:59.559 "recv_buf_size": 4096, 00:12:59.559 "send_buf_size": 4096, 00:12:59.559 "enable_recv_pipe": true, 00:12:59.559 "enable_quickack": false, 00:12:59.559 "enable_placement_id": 0, 00:12:59.559 "enable_zerocopy_send_server": true, 00:12:59.559 "enable_zerocopy_send_client": false, 00:12:59.559 "zerocopy_threshold": 0, 00:12:59.559 "tls_version": 0, 00:12:59.559 "enable_ktls": false 00:12:59.559 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "vmd", 00:12:59.560 "config": [] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "accel", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "accel_set_options", 00:12:59.560 "params": { 00:12:59.560 "small_cache_size": 128, 00:12:59.560 "large_cache_size": 16, 00:12:59.560 "task_count": 2048, 00:12:59.560 "sequence_count": 2048, 00:12:59.560 "buf_count": 2048 00:12:59.560 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "bdev", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "bdev_set_options", 00:12:59.560 "params": { 00:12:59.560 "bdev_io_pool_size": 65535, 00:12:59.560 "bdev_io_cache_size": 256, 00:12:59.560 "bdev_auto_examine": true, 00:12:59.560 "iobuf_small_cache_size": 128, 00:12:59.560 "iobuf_large_cache_size": 16 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "bdev_raid_set_options", 00:12:59.560 "params": { 00:12:59.560 "process_window_size_kb": 1024 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "bdev_iscsi_set_options", 00:12:59.560 "params": { 00:12:59.560 "timeout_sec": 30 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "bdev_nvme_set_options", 00:12:59.560 "params": { 00:12:59.560 "action_on_timeout": "none", 00:12:59.560 "timeout_us": 0, 00:12:59.560 "timeout_admin_us": 0, 00:12:59.560 "keep_alive_timeout_ms": 10000, 00:12:59.560 "transport_retry_count": 4, 00:12:59.560 "arbitration_burst": 0, 00:12:59.560 "low_priority_weight": 0, 00:12:59.560 "medium_priority_weight": 0, 00:12:59.560 "high_priority_weight": 0, 00:12:59.560 "nvme_adminq_poll_period_us": 10000, 00:12:59.560 "nvme_ioq_poll_period_us": 0, 00:12:59.560 "io_queue_requests": 0, 00:12:59.560 "delay_cmd_submit": true, 00:12:59.560 "bdev_retry_count": 3, 00:12:59.560 "transport_ack_timeout": 0, 00:12:59.560 "ctrlr_loss_timeout_sec": 0, 00:12:59.560 "reconnect_delay_sec": 0, 00:12:59.560 "fast_io_fail_timeout_sec": 0, 00:12:59.560 "generate_uuids": false, 00:12:59.560 "transport_tos": 0, 00:12:59.560 "io_path_stat": false, 00:12:59.560 "allow_accel_sequence": false 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "bdev_nvme_set_hotplug", 00:12:59.560 "params": { 00:12:59.560 "period_us": 100000, 00:12:59.560 "enable": false 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "bdev_malloc_create", 00:12:59.560 "params": { 00:12:59.560 "name": "malloc0", 00:12:59.560 "num_blocks": 8192, 00:12:59.560 "block_size": 4096, 00:12:59.560 "physical_block_size": 4096, 00:12:59.560 "uuid": "0896c8d4-0f2b-4a10-8a74-c15cbda57137", 00:12:59.560 "optimal_io_boundary": 0 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 
"method": "bdev_wait_for_examine" 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "scsi", 00:12:59.560 "config": null 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "scheduler", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "framework_set_scheduler", 00:12:59.560 "params": { 00:12:59.560 "name": "static" 00:12:59.560 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "vhost_scsi", 00:12:59.560 "config": [] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "vhost_blk", 00:12:59.560 "config": [] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "ublk", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "ublk_create_target", 00:12:59.560 "params": { 00:12:59.560 "cpumask": "1" 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "ublk_start_disk", 00:12:59.560 "params": { 00:12:59.560 "bdev_name": "malloc0", 00:12:59.560 "ublk_id": 0, 00:12:59.560 "num_queues": 1, 00:12:59.560 "queue_depth": 128 00:12:59.560 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "nbd", 00:12:59.560 "config": [] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "nvmf", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "nvmf_set_config", 00:12:59.560 "params": { 00:12:59.560 "discovery_filter": "match_any", 00:12:59.560 "admin_cmd_passthru": { 00:12:59.560 "identify_ctrlr": false 00:12:59.560 } 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "nvmf_set_max_subsystems", 00:12:59.560 "params": { 00:12:59.560 "max_subsystems": 1024 00:12:59.560 } 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "method": "nvmf_set_crdt", 00:12:59.560 "params": { 00:12:59.560 "crdt1": 0, 00:12:59.560 "crdt2": 0, 00:12:59.560 "crdt3": 0 00:12:59.560 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }, 00:12:59.560 { 00:12:59.560 "subsystem": "iscsi", 00:12:59.560 "config": [ 00:12:59.560 { 00:12:59.560 "method": "iscsi_set_options", 00:12:59.560 "params": { 00:12:59.560 "node_base": "iqn.2016-06.io.spdk", 00:12:59.560 "max_sessions": 128, 00:12:59.560 "max_connections_per_session": 2, 00:12:59.560 "max_queue_depth": 64, 00:12:59.560 "default_time2wait": 2, 00:12:59.560 "default_time2retain": 20, 00:12:59.560 "first_burst_length": 8192, 00:12:59.560 "immediate_data": true, 00:12:59.560 "allow_duplicated_isid": false, 00:12:59.560 "error_recovery_level": 0, 00:12:59.560 "nop_timeout": 60, 00:12:59.560 "nop_in_interval": 30, 00:12:59.560 "disable_chap": false, 00:12:59.560 "require_chap": false, 00:12:59.560 "mutual_chap": false, 00:12:59.560 "chap_group": 0, 00:12:59.560 "max_large_datain_per_connection": 64, 00:12:59.560 "max_r2t_per_connection": 4, 00:12:59.560 "pdu_pool_size": 36864, 00:12:59.560 "immediate_data_pool_size": 16384, 00:12:59.560 "data_out_pool_size": 2048 00:12:59.560 } 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 } 00:12:59.560 ] 00:12:59.560 }' 00:12:59.560 14:59:23 -- ublk/ublk.sh@116 -- # killprocess 80519 00:12:59.560 14:59:23 -- common/autotest_common.sh@936 -- # '[' -z 80519 ']' 00:12:59.560 14:59:23 -- common/autotest_common.sh@940 -- # kill -0 80519 00:12:59.560 14:59:23 -- common/autotest_common.sh@941 -- # uname 00:12:59.560 14:59:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.560 14:59:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80519 00:12:59.819 killing process with pid 80519 00:12:59.819 14:59:23 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:12:59.819 14:59:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:59.819 14:59:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80519' 00:12:59.819 14:59:23 -- common/autotest_common.sh@955 -- # kill 80519 00:12:59.819 14:59:23 -- common/autotest_common.sh@960 -- # wait 80519 00:12:59.819 [2024-11-18 14:59:23.390014] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:00.078 [2024-11-18 14:59:23.417432] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:00.078 [2024-11-18 14:59:23.417562] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:00.078 [2024-11-18 14:59:23.427344] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:00.078 [2024-11-18 14:59:23.427399] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:00.078 [2024-11-18 14:59:23.427407] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:00.078 [2024-11-18 14:59:23.427438] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:00.078 [2024-11-18 14:59:23.427575] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:00.352 14:59:23 -- ublk/ublk.sh@119 -- # tgtpid=80557 00:13:00.352 14:59:23 -- ublk/ublk.sh@121 -- # waitforlisten 80557 00:13:00.352 14:59:23 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:00.352 14:59:23 -- common/autotest_common.sh@829 -- # '[' -z 80557 ']' 00:13:00.352 14:59:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.352 14:59:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
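The JSON blob echoed below is the configuration captured from the first target, fed back into a fresh spdk_tgt through /dev/fd/63. A minimal sketch of that save/restore round-trip, assuming the stock rpc.py helper under scripts/ in the repo used here:

    # dump the live target's configuration as JSON
    config=$(scripts/rpc.py save_config)
    # start a fresh target from the saved JSON; bash exposes the process
    # substitution as /dev/fd/63, which is the -c path seen in this log
    build/bin/spdk_tgt -L ublk -c <(echo "$config") &
    # the restored target should already expose the ublk disk
    scripts/rpc.py ublk_get_disks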
00:13:00.352 14:59:23 -- ublk/ublk.sh@118 -- # echo '{ 00:13:00.352 "subsystems": [ 00:13:00.352 { 00:13:00.352 "subsystem": "iobuf", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "iobuf_set_options", 00:13:00.352 "params": { 00:13:00.352 "small_pool_count": 8192, 00:13:00.352 "large_pool_count": 1024, 00:13:00.352 "small_bufsize": 8192, 00:13:00.352 "large_bufsize": 135168 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "sock", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "sock_impl_set_options", 00:13:00.352 "params": { 00:13:00.352 "impl_name": "posix", 00:13:00.352 "recv_buf_size": 2097152, 00:13:00.352 "send_buf_size": 2097152, 00:13:00.352 "enable_recv_pipe": true, 00:13:00.352 "enable_quickack": false, 00:13:00.352 "enable_placement_id": 0, 00:13:00.352 "enable_zerocopy_send_server": true, 00:13:00.352 "enable_zerocopy_send_client": false, 00:13:00.352 "zerocopy_threshold": 0, 00:13:00.352 "tls_version": 0, 00:13:00.352 "enable_ktls": false 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "sock_impl_set_options", 00:13:00.352 "params": { 00:13:00.352 "impl_name": "ssl", 00:13:00.352 "recv_buf_size": 4096, 00:13:00.352 "send_buf_size": 4096, 00:13:00.352 "enable_recv_pipe": true, 00:13:00.352 "enable_quickack": false, 00:13:00.352 "enable_placement_id": 0, 00:13:00.352 "enable_zerocopy_send_server": true, 00:13:00.352 "enable_zerocopy_send_client": false, 00:13:00.352 "zerocopy_threshold": 0, 00:13:00.352 "tls_version": 0, 00:13:00.352 "enable_ktls": false 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "vmd", 00:13:00.352 "config": [] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "accel", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "accel_set_options", 00:13:00.352 "params": { 00:13:00.352 "small_cache_size": 128, 00:13:00.352 "large_cache_size": 16, 00:13:00.352 "task_count": 2048, 00:13:00.352 "sequence_count": 2048, 00:13:00.352 "buf_count": 2048 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "bdev", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "bdev_set_options", 00:13:00.352 "params": { 00:13:00.352 "bdev_io_pool_size": 65535, 00:13:00.352 "bdev_io_cache_size": 256, 00:13:00.352 "bdev_auto_examine": true, 00:13:00.352 "iobuf_small_cache_size": 128, 00:13:00.352 "iobuf_large_cache_size": 16 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_raid_set_options", 00:13:00.352 "params": { 00:13:00.352 "process_window_size_kb": 1024 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_iscsi_set_options", 00:13:00.352 "params": { 00:13:00.352 "timeout_sec": 30 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_nvme_set_options", 00:13:00.352 "params": { 00:13:00.352 "action_on_timeout": "none", 00:13:00.352 "timeout_us": 0, 00:13:00.352 "timeout_admin_us": 0, 00:13:00.352 "keep_alive_timeout_ms": 10000, 00:13:00.352 "transport_retry_count": 4, 00:13:00.352 "arbitration_burst": 0, 00:13:00.352 "low_priority_weight": 0, 00:13:00.352 "medium_priority_weight": 0, 00:13:00.352 "high_priority_weight": 0, 00:13:00.352 "nvme_adminq_poll_period_us": 10000, 00:13:00.352 "nvme_ioq_poll_period_us": 0, 00:13:00.352 "io_queue_requests": 0, 00:13:00.352 "delay_cmd_submit": true, 00:13:00.352 "bdev_retry_count": 3, 00:13:00.352 "transport_ack_timeout": 0, 00:13:00.352 
"ctrlr_loss_timeout_sec": 0, 00:13:00.352 "reconnect_delay_sec": 0, 00:13:00.352 "fast_io_fail_timeout_sec": 0, 00:13:00.352 "generate_uuids": false, 00:13:00.352 "transport_tos": 0, 00:13:00.352 "io_path_stat": false, 00:13:00.352 "allow_accel_sequence": false 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_nvme_set_hotplug", 00:13:00.352 "params": { 00:13:00.352 "period_us": 100000, 00:13:00.352 "enable": false 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_malloc_create", 00:13:00.352 "params": { 00:13:00.352 "name": "malloc0", 00:13:00.352 "num_blocks": 8192, 00:13:00.352 "block_size": 4096, 00:13:00.352 "physical_block_size": 4096, 00:13:00.352 "uuid": "0896c8d4-0f2b-4a10-8a74-c15cbda57137", 00:13:00.352 "optimal_io_boundary": 0 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "bdev_wait_for_examine" 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "scsi", 00:13:00.352 "config": null 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "scheduler", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "framework_set_scheduler", 00:13:00.352 "params": { 00:13:00.352 "name": "static" 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "vhost_scsi", 00:13:00.352 "config": [] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "vhost_blk", 00:13:00.352 "config": [] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "ublk", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "ublk_create_target", 00:13:00.352 "params": { 00:13:00.352 "cpumask": "1" 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "ublk_start_disk", 00:13:00.352 "params": { 00:13:00.352 "bdev_name": "malloc0", 00:13:00.352 "ublk_id": 0, 00:13:00.352 "num_queues": 1, 00:13:00.352 "queue_depth": 128 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "nbd", 00:13:00.352 "config": [] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "nvmf", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "nvmf_set_config", 00:13:00.352 "params": { 00:13:00.352 "discovery_filter": "match_any", 00:13:00.352 "admin_cmd_passthru": { 00:13:00.352 "identify_ctrlr": false 00:13:00.352 } 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "nvmf_set_max_subsystems", 00:13:00.352 "params": { 00:13:00.352 "max_subsystems": 1024 00:13:00.352 } 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "method": "nvmf_set_crdt", 00:13:00.352 "params": { 00:13:00.352 "crdt1": 0, 00:13:00.352 "crdt2": 0, 00:13:00.352 "crdt3": 0 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }, 00:13:00.352 { 00:13:00.352 "subsystem": "iscsi", 00:13:00.352 "config": [ 00:13:00.352 { 00:13:00.352 "method": "iscsi_set_options", 00:13:00.352 "params": { 00:13:00.352 "node_base": "iqn.2016-06.io.spdk", 00:13:00.352 "max_sessions": 128, 00:13:00.352 "max_connections_per_session": 2, 00:13:00.352 "max_queue_depth": 64, 00:13:00.352 "default_time2wait": 2, 00:13:00.352 "default_time2retain": 20, 00:13:00.352 "first_burst_length": 8192, 00:13:00.352 "immediate_data": true, 00:13:00.352 "allow_duplicated_isid": false, 00:13:00.352 "error_recovery_level": 0, 00:13:00.352 "nop_timeout": 60, 00:13:00.352 "nop_in_interval": 30, 00:13:00.352 "disable_chap": false, 00:13:00.352 "require_chap": false, 00:13:00.352 "mutual_chap": false, 00:13:00.352 "chap_group": 0, 00:13:00.352 
"max_large_datain_per_connection": 64, 00:13:00.352 "max_r2t_per_connection": 4, 00:13:00.352 "pdu_pool_size": 36864, 00:13:00.352 "immediate_data_pool_size": 16384, 00:13:00.352 "data_out_pool_size": 2048 00:13:00.352 } 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 } 00:13:00.352 ] 00:13:00.352 }' 00:13:00.352 14:59:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.352 14:59:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.352 14:59:23 -- common/autotest_common.sh@10 -- # set +x 00:13:00.352 [2024-11-18 14:59:23.870509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.353 [2024-11-18 14:59:23.870623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80557 ] 00:13:00.678 [2024-11-18 14:59:24.019489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.678 [2024-11-18 14:59:24.070639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.678 [2024-11-18 14:59:24.070928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.937 [2024-11-18 14:59:24.380616] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:00.937 [2024-11-18 14:59:24.388434] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:00.937 [2024-11-18 14:59:24.388519] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:00.937 [2024-11-18 14:59:24.388527] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:00.937 [2024-11-18 14:59:24.388537] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:00.937 [2024-11-18 14:59:24.397418] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:00.937 [2024-11-18 14:59:24.397441] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:00.937 [2024-11-18 14:59:24.404341] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:00.937 [2024-11-18 14:59:24.404430] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:00.937 [2024-11-18 14:59:24.421338] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:01.196 14:59:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.196 14:59:24 -- common/autotest_common.sh@862 -- # return 0 00:13:01.196 14:59:24 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:01.196 14:59:24 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:01.196 14:59:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.196 14:59:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.196 14:59:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.196 14:59:24 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:01.196 14:59:24 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:01.196 14:59:24 -- ublk/ublk.sh@125 -- # killprocess 80557 00:13:01.196 14:59:24 -- common/autotest_common.sh@936 -- # '[' -z 80557 ']' 00:13:01.196 14:59:24 -- common/autotest_common.sh@940 -- # kill -0 80557 00:13:01.196 14:59:24 -- common/autotest_common.sh@941 -- # uname 00:13:01.196 14:59:24 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:01.196 14:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80557 00:13:01.196 killing process with pid 80557 00:13:01.196 14:59:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:01.196 14:59:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:01.196 14:59:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80557' 00:13:01.196 14:59:24 -- common/autotest_common.sh@955 -- # kill 80557 00:13:01.196 14:59:24 -- common/autotest_common.sh@960 -- # wait 80557 00:13:01.455 [2024-11-18 14:59:24.978274] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:01.455 [2024-11-18 14:59:25.017366] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:01.455 [2024-11-18 14:59:25.017491] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:01.455 [2024-11-18 14:59:25.025344] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:01.455 [2024-11-18 14:59:25.025398] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:01.455 [2024-11-18 14:59:25.025406] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:01.455 [2024-11-18 14:59:25.025436] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:01.455 [2024-11-18 14:59:25.025586] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:02.023 14:59:25 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:02.023 00:13:02.023 real 0m3.477s 00:13:02.023 user 0m2.409s 00:13:02.023 sys 0m1.642s 00:13:02.023 ************************************ 00:13:02.023 END TEST test_save_ublk_config 00:13:02.023 ************************************ 00:13:02.023 14:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.023 14:59:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.023 14:59:25 -- ublk/ublk.sh@139 -- # spdk_pid=80607 00:13:02.023 14:59:25 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:02.023 14:59:25 -- ublk/ublk.sh@141 -- # waitforlisten 80607 00:13:02.023 14:59:25 -- common/autotest_common.sh@829 -- # '[' -z 80607 ']' 00:13:02.023 14:59:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.023 14:59:25 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:02.023 14:59:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.023 14:59:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.023 14:59:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.023 14:59:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.023 [2024-11-18 14:59:25.507060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
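The test_create_ublk run that follows drives the new target through the basic ublk RPC flow: create the target, back it with a malloc bdev, expose the block device, then verify it. A minimal sketch of that sequence, assuming the repo's scripts/rpc.py and an already-loaded ublk_drv kernel module:

    # requires the kernel driver: modprobe ublk_drv
    scripts/rpc.py ublk_create_target
    # 128 MiB malloc bdev with a 4096-byte block size; prints "Malloc0"
    scripts/rpc.py bdev_malloc_create 128 4096
    # expose Malloc0 as ublk id 0 with 4 queues of depth 512 -> /dev/ublkb0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    # query it back; -n filters to a single ublk id
    scripts/rpc.py ublk_get_disks -n 0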
00:13:02.023 [2024-11-18 14:59:25.507782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80607 ] 00:13:02.281 [2024-11-18 14:59:25.657132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:02.281 [2024-11-18 14:59:25.698186] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.281 [2024-11-18 14:59:25.698726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.281 [2024-11-18 14:59:25.698780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.853 14:59:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.853 14:59:26 -- common/autotest_common.sh@862 -- # return 0 00:13:02.853 14:59:26 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:02.853 14:59:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.853 14:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.853 14:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 ************************************ 00:13:02.853 START TEST test_create_ublk 00:13:02.853 ************************************ 00:13:02.853 14:59:26 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:02.853 14:59:26 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:02.853 14:59:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.853 14:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 [2024-11-18 14:59:26.334524] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:02.853 14:59:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.853 14:59:26 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:02.853 14:59:26 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:02.853 14:59:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.853 14:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 14:59:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.853 14:59:26 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:02.853 14:59:26 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:02.853 14:59:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.853 14:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 [2024-11-18 14:59:26.397512] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:02.853 [2024-11-18 14:59:26.397898] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:02.853 [2024-11-18 14:59:26.397912] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:02.853 [2024-11-18 14:59:26.397922] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:02.853 [2024-11-18 14:59:26.406575] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:02.853 [2024-11-18 14:59:26.406614] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:02.853 [2024-11-18 14:59:26.413344] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:02.853 [2024-11-18 14:59:26.420412] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:03.114 [2024-11-18 14:59:26.443345] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:03.114 14:59:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:03.114 14:59:26 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:03.114 14:59:26 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:03.114 14:59:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.114 14:59:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.114 14:59:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:03.114 { 00:13:03.114 "ublk_device": "/dev/ublkb0", 00:13:03.114 "id": 0, 00:13:03.114 "queue_depth": 512, 00:13:03.114 "num_queues": 4, 00:13:03.114 "bdev_name": "Malloc0" 00:13:03.114 } 00:13:03.114 ]' 00:13:03.114 14:59:26 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:03.114 14:59:26 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:03.114 14:59:26 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:03.114 14:59:26 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:03.114 14:59:26 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:03.114 14:59:26 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:03.114 14:59:26 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:03.114 14:59:26 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:03.114 14:59:26 -- lvol/common.sh@41 -- # local offset=0 00:13:03.114 14:59:26 -- lvol/common.sh@42 -- # local size=134217728 00:13:03.114 14:59:26 -- lvol/common.sh@43 -- # local rw=write 00:13:03.114 14:59:26 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:03.114 14:59:26 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:03.114 14:59:26 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:03.114 14:59:26 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:03.114 14:59:26 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:03.114 14:59:26 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:03.114 14:59:26 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:03.375 fio: verification read phase will never start because write phase uses all of runtime 00:13:03.375 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:03.375 fio-3.35 00:13:03.375 Starting 1 process 00:13:13.359 00:13:13.359 fio_test: (groupid=0, jobs=1): err= 0: pid=80652: Mon Nov 18 14:59:36 2024 00:13:13.359 write: IOPS=20.7k, BW=80.9MiB/s (84.8MB/s)(809MiB/10002msec); 0 zone resets 00:13:13.359 clat (usec): min=33, max=3944, avg=47.51, stdev=79.66 00:13:13.359 lat (usec): min=33, max=3944, avg=47.97, stdev=79.68 00:13:13.359 clat percentiles (usec): 00:13:13.359 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 41], 00:13:13.359 | 30.00th=[ 
42], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:13:13.359 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 52], 95.00th=[ 57], 00:13:13.359 | 99.00th=[ 65], 99.50th=[ 73], 99.90th=[ 1287], 99.95th=[ 2376], 00:13:13.359 | 99.99th=[ 3458] 00:13:13.359 bw ( KiB/s): min=74704, max=87112, per=99.93%, avg=82734.74, stdev=3277.66, samples=19 00:13:13.359 iops : min=18676, max=21778, avg=20683.68, stdev=819.42, samples=19 00:13:13.359 lat (usec) : 50=86.72%, 100=13.00%, 250=0.12%, 500=0.03%, 750=0.01% 00:13:13.359 lat (usec) : 1000=0.01% 00:13:13.359 lat (msec) : 2=0.04%, 4=0.07% 00:13:13.359 cpu : usr=3.44%, sys=15.48%, ctx=207042, majf=0, minf=795 00:13:13.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.359 issued rwts: total=0,207029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.359 00:13:13.359 Run status group 0 (all jobs): 00:13:13.359 WRITE: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=809MiB (848MB), run=10002-10002msec 00:13:13.359 00:13:13.359 Disk stats (read/write): 00:13:13.359 ublkb0: ios=0/204733, merge=0/0, ticks=0/8124, in_queue=8124, util=98.97% 00:13:13.359 14:59:36 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:13.359 14:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.359 14:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.359 [2024-11-18 14:59:36.863763] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:13.359 [2024-11-18 14:59:36.915380] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:13.359 [2024-11-18 14:59:36.915962] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:13.359 [2024-11-18 14:59:36.923346] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:13.359 [2024-11-18 14:59:36.923577] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:13.359 [2024-11-18 14:59:36.923584] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:13.359 14:59:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.359 14:59:36 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:13.359 14:59:36 -- common/autotest_common.sh@650 -- # local es=0 00:13:13.359 14:59:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:13.359 14:59:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:13.359 14:59:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.359 14:59:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:13.359 14:59:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.359 14:59:36 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:13.359 14:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.359 14:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.359 [2024-11-18 14:59:36.939420] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:13.618 request: 00:13:13.618 { 00:13:13.618 "ublk_id": 0, 00:13:13.618 "method": "ublk_stop_disk", 00:13:13.618 "req_id": 1 00:13:13.618 } 00:13:13.618 Got JSON-RPC error response 00:13:13.618 response: 00:13:13.618 { 00:13:13.618 "code": -19, 00:13:13.618 "message": "No 
such device" 00:13:13.618 } 00:13:13.618 14:59:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:13.618 14:59:36 -- common/autotest_common.sh@653 -- # es=1 00:13:13.618 14:59:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.618 14:59:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.618 14:59:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.618 14:59:36 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:13.618 14:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.618 14:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 [2024-11-18 14:59:36.955398] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:13.618 [2024-11-18 14:59:36.956864] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:13.618 [2024-11-18 14:59:36.956894] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:13.618 14:59:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.618 14:59:36 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:13.618 14:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.618 14:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.618 14:59:37 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:13.618 14:59:37 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:13.618 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.618 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.618 14:59:37 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:13.618 14:59:37 -- lvol/common.sh@26 -- # jq length 00:13:13.618 14:59:37 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:13.618 14:59:37 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:13.618 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.618 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.618 14:59:37 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:13.618 14:59:37 -- lvol/common.sh@28 -- # jq length 00:13:13.618 ************************************ 00:13:13.618 END TEST test_create_ublk 00:13:13.618 ************************************ 00:13:13.618 14:59:37 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:13.618 00:13:13.618 real 0m10.802s 00:13:13.618 user 0m0.640s 00:13:13.618 sys 0m1.619s 00:13:13.618 14:59:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.618 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 14:59:37 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:13.618 14:59:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:13.618 14:59:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.618 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.618 ************************************ 00:13:13.619 START TEST test_create_multi_ublk 00:13:13.619 ************************************ 00:13:13.619 14:59:37 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:13.619 14:59:37 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:13.619 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.619 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.619 [2024-11-18 14:59:37.178341] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 
00:13:13.619 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.619 14:59:37 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:13.619 14:59:37 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:13.619 14:59:37 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:13.619 14:59:37 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:13.619 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.619 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.877 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.877 14:59:37 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:13.877 14:59:37 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:13.877 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.877 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.877 [2024-11-18 14:59:37.261509] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:13.877 [2024-11-18 14:59:37.261862] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:13.877 [2024-11-18 14:59:37.261875] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:13.877 [2024-11-18 14:59:37.261888] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:13.877 [2024-11-18 14:59:37.285335] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:13.877 [2024-11-18 14:59:37.285359] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:13.877 [2024-11-18 14:59:37.297333] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:13.877 [2024-11-18 14:59:37.297851] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:13.877 [2024-11-18 14:59:37.316572] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:13.877 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.877 14:59:37 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:13.877 14:59:37 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:13.877 14:59:37 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:13.877 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.877 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.877 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.877 14:59:37 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:13.877 14:59:37 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:13.877 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.877 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:13.877 [2024-11-18 14:59:37.423436] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:13.877 [2024-11-18 14:59:37.423739] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:13.877 [2024-11-18 14:59:37.423752] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:13.878 [2024-11-18 14:59:37.423757] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:13.878 [2024-11-18 14:59:37.435354] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:13.878 [2024-11-18 14:59:37.435369] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:13.878 [2024-11-18 
14:59:37.447359] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:13.878 [2024-11-18 14:59:37.447871] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:13.878 [2024-11-18 14:59:37.460365] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.136 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.137 14:59:37 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:14.137 14:59:37 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.137 14:59:37 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:14.137 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.137 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.137 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.137 14:59:37 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:14.137 14:59:37 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:14.137 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.137 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.137 [2024-11-18 14:59:37.567418] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:14.137 [2024-11-18 14:59:37.567725] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:14.137 [2024-11-18 14:59:37.567737] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:14.137 [2024-11-18 14:59:37.567743] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:14.137 [2024-11-18 14:59:37.579347] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:14.137 [2024-11-18 14:59:37.579366] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:14.137 [2024-11-18 14:59:37.591337] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:14.137 [2024-11-18 14:59:37.591850] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:14.137 [2024-11-18 14:59:37.604350] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.137 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.137 14:59:37 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:14.137 14:59:37 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.137 14:59:37 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:14.137 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.137 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.137 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.137 14:59:37 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:14.137 14:59:37 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:14.137 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.137 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.137 [2024-11-18 14:59:37.711436] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:14.137 [2024-11-18 14:59:37.711738] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:14.137 [2024-11-18 14:59:37.711751] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:14.137 [2024-11-18 14:59:37.711757] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: 
ctrl cmd UBLK_CMD_ADD_DEV 00:13:14.137 [2024-11-18 14:59:37.723365] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:14.137 [2024-11-18 14:59:37.723380] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:14.395 [2024-11-18 14:59:37.735345] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:14.395 [2024-11-18 14:59:37.735848] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:14.395 [2024-11-18 14:59:37.741172] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.395 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:14.395 14:59:37 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:14.395 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.395 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:14.395 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:14.395 { 00:13:14.395 "ublk_device": "/dev/ublkb0", 00:13:14.395 "id": 0, 00:13:14.395 "queue_depth": 512, 00:13:14.395 "num_queues": 4, 00:13:14.395 "bdev_name": "Malloc0" 00:13:14.395 }, 00:13:14.395 { 00:13:14.395 "ublk_device": "/dev/ublkb1", 00:13:14.395 "id": 1, 00:13:14.395 "queue_depth": 512, 00:13:14.395 "num_queues": 4, 00:13:14.395 "bdev_name": "Malloc1" 00:13:14.395 }, 00:13:14.395 { 00:13:14.395 "ublk_device": "/dev/ublkb2", 00:13:14.395 "id": 2, 00:13:14.395 "queue_depth": 512, 00:13:14.395 "num_queues": 4, 00:13:14.395 "bdev_name": "Malloc2" 00:13:14.395 }, 00:13:14.395 { 00:13:14.395 "ublk_device": "/dev/ublkb3", 00:13:14.395 "id": 3, 00:13:14.395 "queue_depth": 512, 00:13:14.395 "num_queues": 4, 00:13:14.395 "bdev_name": "Malloc3" 00:13:14.395 } 00:13:14.395 ]' 00:13:14.395 14:59:37 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:14.395 14:59:37 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.395 14:59:37 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:14.395 14:59:37 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:14.395 14:59:37 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:14.395 14:59:37 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:14.395 14:59:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:14.395 14:59:37 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.395 14:59:37 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:14.395 14:59:37 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:14.395 14:59:37 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:14.653 14:59:38 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:14.653 14:59:38 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:14.653 14:59:38 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:14.653 14:59:38 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.653 14:59:38 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:14.653 14:59:38 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:14.653 14:59:38 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:14.653 14:59:38 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:14.653 14:59:38 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:14.911 14:59:38 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:14.911 14:59:38 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.911 14:59:38 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:14.911 14:59:38 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:14.911 14:59:38 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:14.911 14:59:38 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:14.911 14:59:38 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:14.911 14:59:38 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:14.911 14:59:38 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:14.911 14:59:38 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:14.911 14:59:38 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:14.911 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.911 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:14.911 [2024-11-18 14:59:38.435407] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:14.911 [2024-11-18 14:59:38.484836] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:14.911 [2024-11-18 14:59:38.485796] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.170 [2024-11-18 14:59:38.502352] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.170 [2024-11-18 14:59:38.502585] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:15.170 [2024-11-18 14:59:38.502600] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:15.170 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.170 14:59:38 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.170 14:59:38 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:15.170 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.170 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.170 [2024-11-18 14:59:38.527396] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.170 [2024-11-18 14:59:38.567374] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.170 [2024-11-18 14:59:38.567981] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.170 [2024-11-18 14:59:38.579337] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.170 [2024-11-18 14:59:38.579570] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: 
ublk1: remove from tailq 00:13:15.170 [2024-11-18 14:59:38.579579] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:15.170 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.170 14:59:38 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.170 14:59:38 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:15.170 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.170 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.170 [2024-11-18 14:59:38.591410] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.170 [2024-11-18 14:59:38.639372] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.170 [2024-11-18 14:59:38.639967] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.170 [2024-11-18 14:59:38.649353] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.170 [2024-11-18 14:59:38.649563] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:15.170 [2024-11-18 14:59:38.649575] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:15.170 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.170 14:59:38 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.170 14:59:38 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:15.170 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.170 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.170 [2024-11-18 14:59:38.675390] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:15.170 [2024-11-18 14:59:38.718365] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:15.170 [2024-11-18 14:59:38.718934] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:15.170 [2024-11-18 14:59:38.730365] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:15.170 [2024-11-18 14:59:38.730577] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:15.170 [2024-11-18 14:59:38.730585] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:15.170 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.170 14:59:38 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:15.429 [2024-11-18 14:59:38.921380] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:15.429 [2024-11-18 14:59:38.922628] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:15.429 [2024-11-18 14:59:38.922653] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:15.429 14:59:38 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:15.429 14:59:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.429 14:59:38 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:15.429 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.429 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:15.688 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.688 14:59:39 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.688 14:59:39 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:15.688 14:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.688 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.688 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.688 
14:59:39 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.688 14:59:39 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:15.688 14:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.688 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.688 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.688 14:59:39 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:15.688 14:59:39 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:15.688 14:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.688 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.688 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.688 14:59:39 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:15.688 14:59:39 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:15.688 14:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.688 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.688 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.688 14:59:39 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:15.688 14:59:39 -- lvol/common.sh@26 -- # jq length 00:13:15.946 14:59:39 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:15.946 14:59:39 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:15.946 14:59:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.946 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.946 14:59:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.946 14:59:39 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:15.946 14:59:39 -- lvol/common.sh@28 -- # jq length 00:13:15.946 ************************************ 00:13:15.946 END TEST test_create_multi_ublk 00:13:15.946 ************************************ 00:13:15.946 14:59:39 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:15.946 00:13:15.946 real 0m2.175s 00:13:15.946 user 0m0.789s 00:13:15.946 sys 0m0.170s 00:13:15.946 14:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.946 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.946 14:59:39 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:15.946 14:59:39 -- ublk/ublk.sh@147 -- # cleanup 00:13:15.946 14:59:39 -- ublk/ublk.sh@130 -- # killprocess 80607 00:13:15.946 14:59:39 -- common/autotest_common.sh@936 -- # '[' -z 80607 ']' 00:13:15.946 14:59:39 -- common/autotest_common.sh@940 -- # kill -0 80607 00:13:15.946 14:59:39 -- common/autotest_common.sh@941 -- # uname 00:13:15.946 14:59:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.946 14:59:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80607 00:13:15.946 killing process with pid 80607 00:13:15.946 14:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.947 14:59:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.947 14:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80607' 00:13:15.947 14:59:39 -- common/autotest_common.sh@955 -- # kill 80607 00:13:15.947 14:59:39 -- common/autotest_common.sh@960 -- # wait 80607 00:13:16.205 [2024-11-18 14:59:39.618637] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:16.205 [2024-11-18 14:59:39.618692] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:16.465 00:13:16.465 real 0m18.092s 00:13:16.465 user 0m27.924s 00:13:16.465 sys 0m8.255s 00:13:16.465 14:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:16.465 
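
The multi-ublk teardown traced above reduces to a short RPC sequence. The following is a minimal sketch, not part of the captured run: it assumes a live spdk_tgt with ublk devices 0 through 3 backed by Malloc0 through Malloc3, and uses only the RPCs visible in the trace (the repeated "[[ 0 == 0 ]]" lines are the rpc_cmd wrapper checking each call's exit status).

    # Sketch of the teardown above (assumes a running spdk_tgt with
    # ublk devices 0..3 backed by Malloc0..Malloc3).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do
        "$RPC" ublk_stop_disk "$i"        # traced as UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done
    "$RPC" -t 120 ublk_destroy_target     # traced as "_ublk_fini: finish shutdown"
    for i in $(seq 0 3); do
        "$RPC" bdev_malloc_delete "Malloc$i"
    done
    "$RPC" bdev_get_bdevs         | jq length   # expect 0: no leftover bdevs
    "$RPC" bdev_lvol_get_lvstores | jq length   # expect 0: no leftover lvstores
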
************************************ 00:13:16.465 END TEST ublk 00:13:16.465 ************************************ 00:13:16.465 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:16.465 14:59:39 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:16.465 14:59:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:16.465 14:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.465 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:16.465 ************************************ 00:13:16.465 START TEST ublk_recovery 00:13:16.465 ************************************ 00:13:16.465 14:59:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:16.465 * Looking for test storage... 00:13:16.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:16.465 14:59:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:16.465 14:59:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:16.465 14:59:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:16.465 14:59:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:16.465 14:59:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:16.465 14:59:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:16.465 14:59:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:16.465 14:59:40 -- scripts/common.sh@335 -- # IFS=.-: 00:13:16.465 14:59:40 -- scripts/common.sh@335 -- # read -ra ver1 00:13:16.465 14:59:40 -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.465 14:59:40 -- scripts/common.sh@336 -- # read -ra ver2 00:13:16.465 14:59:40 -- scripts/common.sh@337 -- # local 'op=<' 00:13:16.465 14:59:40 -- scripts/common.sh@339 -- # ver1_l=2 00:13:16.465 14:59:40 -- scripts/common.sh@340 -- # ver2_l=1 00:13:16.465 14:59:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:16.465 14:59:40 -- scripts/common.sh@343 -- # case "$op" in 00:13:16.465 14:59:40 -- scripts/common.sh@344 -- # : 1 00:13:16.465 14:59:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:16.465 14:59:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.465 14:59:40 -- scripts/common.sh@364 -- # decimal 1 00:13:16.465 14:59:40 -- scripts/common.sh@352 -- # local d=1 00:13:16.465 14:59:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.465 14:59:40 -- scripts/common.sh@354 -- # echo 1 00:13:16.465 14:59:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:16.465 14:59:40 -- scripts/common.sh@365 -- # decimal 2 00:13:16.465 14:59:40 -- scripts/common.sh@352 -- # local d=2 00:13:16.465 14:59:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.465 14:59:40 -- scripts/common.sh@354 -- # echo 2 00:13:16.465 14:59:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:16.465 14:59:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:16.465 14:59:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:16.465 14:59:40 -- scripts/common.sh@367 -- # return 0 00:13:16.465 14:59:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.465 14:59:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:16.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.465 --rc genhtml_branch_coverage=1 00:13:16.465 --rc genhtml_function_coverage=1 00:13:16.465 --rc genhtml_legend=1 00:13:16.465 --rc geninfo_all_blocks=1 00:13:16.465 --rc geninfo_unexecuted_blocks=1 00:13:16.465 00:13:16.465 ' 00:13:16.465 14:59:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:16.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.465 --rc genhtml_branch_coverage=1 00:13:16.465 --rc genhtml_function_coverage=1 00:13:16.465 --rc genhtml_legend=1 00:13:16.465 --rc geninfo_all_blocks=1 00:13:16.465 --rc geninfo_unexecuted_blocks=1 00:13:16.465 00:13:16.465 ' 00:13:16.465 14:59:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:16.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.465 --rc genhtml_branch_coverage=1 00:13:16.465 --rc genhtml_function_coverage=1 00:13:16.465 --rc genhtml_legend=1 00:13:16.465 --rc geninfo_all_blocks=1 00:13:16.465 --rc geninfo_unexecuted_blocks=1 00:13:16.465 00:13:16.465 ' 00:13:16.465 14:59:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:16.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.465 --rc genhtml_branch_coverage=1 00:13:16.465 --rc genhtml_function_coverage=1 00:13:16.465 --rc genhtml_legend=1 00:13:16.465 --rc geninfo_all_blocks=1 00:13:16.465 --rc geninfo_unexecuted_blocks=1 00:13:16.465 00:13:16.465 ' 00:13:16.465 14:59:40 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:16.465 14:59:40 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:16.465 14:59:40 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:16.465 14:59:40 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:16.465 14:59:40 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:16.465 14:59:40 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:16.465 14:59:40 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:16.465 14:59:40 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:16.465 14:59:40 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:16.465 14:59:40 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:16.725 14:59:40 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=80972 00:13:16.725 14:59:40 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:16.725 14:59:40 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:13:16.725 14:59:40 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 80972 00:13:16.725 14:59:40 -- common/autotest_common.sh@829 -- # '[' -z 80972 ']' 00:13:16.725 14:59:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.725 14:59:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.725 14:59:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.725 14:59:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.725 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:16.725 [2024-11-18 14:59:40.120974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:16.725 [2024-11-18 14:59:40.121246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80972 ] 00:13:16.725 [2024-11-18 14:59:40.266202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:16.725 [2024-11-18 14:59:40.304775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.725 [2024-11-18 14:59:40.305221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.725 [2024-11-18 14:59:40.305292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.659 14:59:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.659 14:59:40 -- common/autotest_common.sh@862 -- # return 0 00:13:17.659 14:59:40 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:17.659 14:59:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.659 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:17.659 [2024-11-18 14:59:40.944413] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:17.659 14:59:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.659 14:59:40 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:17.659 14:59:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.659 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:17.659 malloc0 00:13:17.659 14:59:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.659 14:59:40 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:17.659 14:59:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.659 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:17.659 [2024-11-18 14:59:40.983437] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:17.659 [2024-11-18 14:59:40.983525] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:17.659 [2024-11-18 14:59:40.983537] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:17.659 [2024-11-18 14:59:40.983545] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:17.659 [2024-11-18 14:59:40.992425] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:17.659 [2024-11-18 14:59:40.992449] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:17.659 [2024-11-18 14:59:40.997336] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:17.659 [2024-11-18 14:59:40.997454] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:17.659 [2024-11-18 14:59:41.005341] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:17.659 1 00:13:17.659 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.659 14:59:41 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:18.592 14:59:42 -- ublk/ublk_recovery.sh@31 -- # fio_proc=81005 00:13:18.592 14:59:42 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:18.592 14:59:42 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:18.592 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.592 fio-3.35 00:13:18.592 Starting 1 process 00:13:23.858 14:59:47 -- ublk/ublk_recovery.sh@36 -- # kill -9 80972 00:13:23.858 14:59:47 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:29.123 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 80972 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:29.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.123 14:59:52 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=81110 00:13:29.123 14:59:52 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.123 14:59:52 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 81110 00:13:29.123 14:59:52 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:29.123 14:59:52 -- common/autotest_common.sh@829 -- # '[' -z 81110 ']' 00:13:29.123 14:59:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.123 14:59:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.123 14:59:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.123 14:59:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.123 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.123 [2024-11-18 14:59:52.090590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
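
The kill and restart just traced set up the actual recovery, which follows below. As a hedged sketch of the whole flow (commands and arguments are the ones recorded in this run; waitforlisten and rpc_cmd are the autotest_common.sh helpers):

    # Sketch of the ublk_recovery flow in this run; fio was started against
    # /dev/ublkb1 before the kill and keeps running through the crash window.
    kill -9 "$spdk_pid"                        # first target (pid 80972) dies mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # second target (pid 81110)
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_recover_disk malloc0 1        # UBLK_CMD_START_USER_RECOVERY .. END_USER_RECOVERY
    wait "$fio_proc"                           # fio (pid 81005) finishes its 60 s run
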
00:13:29.123 [2024-11-18 14:59:52.090879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81110 ] 00:13:29.123 [2024-11-18 14:59:52.239754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.123 [2024-11-18 14:59:52.281135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.123 [2024-11-18 14:59:52.281607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.123 [2024-11-18 14:59:52.281699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.381 14:59:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.381 14:59:52 -- common/autotest_common.sh@862 -- # return 0 00:13:29.381 14:59:52 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:29.381 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 [2024-11-18 14:59:52.914588] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:29.381 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 14:59:52 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:29.381 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 malloc0 00:13:29.381 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 14:59:52 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:29.381 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 [2024-11-18 14:59:52.953499] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:29.381 [2024-11-18 14:59:52.953551] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:29.381 [2024-11-18 14:59:52.953570] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:29.381 [2024-11-18 14:59:52.961372] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:29.381 [2024-11-18 14:59:52.961394] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:29.381 1 00:13:29.381 [2024-11-18 14:59:52.961473] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:29.381 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 14:59:52 -- ublk/ublk_recovery.sh@52 -- # wait 81005 00:13:55.915 [2024-11-18 15:00:17.412343] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:55.915 [2024-11-18 15:00:17.415717] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:55.915 [2024-11-18 15:00:17.422517] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:55.915 [2024-11-18 15:00:17.422537] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:22.542 00:14:22.542 fio_test: (groupid=0, jobs=1): err= 0: pid=81008: Mon Nov 18 15:00:42 2024 00:14:22.542 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(3462MiB/60002msec) 00:14:22.542 slat (nsec): min=1031, max=322240, 
avg=5406.02, stdev=1704.48 00:14:22.542 clat (usec): min=737, max=30414k, avg=4216.89, stdev=254337.68 00:14:22.542 lat (usec): min=749, max=30414k, avg=4222.29, stdev=254337.68 00:14:22.542 clat percentiles (usec): 00:14:22.542 | 1.00th=[ 1729], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:14:22.542 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:14:22.542 | 70.00th=[ 1991], 80.00th=[ 2008], 90.00th=[ 2057], 95.00th=[ 3097], 00:14:22.542 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7963], 99.95th=[12649], 00:14:22.542 | 99.99th=[13566] 00:14:22.542 bw ( KiB/s): min=36168, max=124048, per=100.00%, avg=118256.95, stdev=15265.52, samples=59 00:14:22.542 iops : min= 9042, max=31012, avg=29564.24, stdev=3816.38, samples=59 00:14:22.542 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(3457MiB/60002msec); 0 zone resets 00:14:22.542 slat (nsec): min=1441, max=197041, avg=5687.89, stdev=1702.29 00:14:22.542 clat (usec): min=701, max=30414k, avg=4442.78, stdev=262613.73 00:14:22.542 lat (usec): min=707, max=30414k, avg=4448.46, stdev=262613.73 00:14:22.542 clat percentiles (usec): 00:14:22.542 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2008], 00:14:22.542 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2073], 00:14:22.542 | 70.00th=[ 2089], 80.00th=[ 2114], 90.00th=[ 2147], 95.00th=[ 3032], 00:14:22.542 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 8094], 99.95th=[12780], 00:14:22.542 | 99.99th=[13698] 00:14:22.542 bw ( KiB/s): min=35736, max=124496, per=100.00%, avg=118085.69, stdev=15291.68, samples=59 00:14:22.542 iops : min= 8934, max=31124, avg=29521.42, stdev=3822.92, samples=59 00:14:22.542 lat (usec) : 750=0.01%, 1000=0.01% 00:14:22.542 lat (msec) : 2=48.19%, 4=48.86%, 10=2.88%, 20=0.06%, >=2000=0.01% 00:14:22.542 cpu : usr=3.12%, sys=16.72%, ctx=58264, majf=0, minf=13 00:14:22.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:22.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.542 issued rwts: total=886348,885002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.542 00:14:22.542 Run status group 0 (all jobs): 00:14:22.542 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=3462MiB (3630MB), run=60002-60002msec 00:14:22.542 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=3457MiB (3625MB), run=60002-60002msec 00:14:22.542 00:14:22.542 Disk stats (read/write): 00:14:22.542 ublkb1: ios=883046/881754, merge=0/0, ticks=3684589/3804812, in_queue=7489402, util=99.89% 00:14:22.542 15:00:42 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:22.542 15:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.542 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.542 [2024-11-18 15:00:42.275117] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.542 [2024-11-18 15:00:42.304353] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.542 [2024-11-18 15:00:42.304589] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.542 [2024-11-18 15:00:42.313354] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.542 [2024-11-18 15:00:42.317427] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: 
remove from tailq 00:14:22.542 [2024-11-18 15:00:42.317446] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:22.542 15:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.542 15:00:42 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:22.542 15:00:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.542 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.542 [2024-11-18 15:00:42.328410] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:22.542 [2024-11-18 15:00:42.329332] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:22.542 [2024-11-18 15:00:42.329360] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:22.542 15:00:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.542 15:00:42 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:22.542 15:00:42 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:22.542 15:00:42 -- ublk/ublk_recovery.sh@14 -- # killprocess 81110 00:14:22.542 15:00:42 -- common/autotest_common.sh@936 -- # '[' -z 81110 ']' 00:14:22.542 15:00:42 -- common/autotest_common.sh@940 -- # kill -0 81110 00:14:22.542 15:00:42 -- common/autotest_common.sh@941 -- # uname 00:14:22.543 15:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.543 15:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81110 00:14:22.543 killing process with pid 81110 00:14:22.543 15:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.543 15:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.543 15:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81110' 00:14:22.543 15:00:42 -- common/autotest_common.sh@955 -- # kill 81110 00:14:22.543 15:00:42 -- common/autotest_common.sh@960 -- # wait 81110 00:14:22.543 [2024-11-18 15:00:42.597558] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:22.543 [2024-11-18 15:00:42.597621] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:22.543 00:14:22.543 real 1m3.051s 00:14:22.543 user 1m45.676s 00:14:22.543 sys 0m21.751s 00:14:22.543 15:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:22.543 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.543 ************************************ 00:14:22.543 END TEST ublk_recovery 00:14:22.543 ************************************ 00:14:22.543 15:00:42 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:42 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:22.543 15:00:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.543 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:14:22.543 15:00:43 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:22.543 15:00:43 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:22.543 15:00:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:22.543 15:00:43 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.543 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.543 ************************************ 00:14:22.543 START TEST ftl 00:14:22.543 ************************************ 00:14:22.543 15:00:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:22.543 * Looking for test storage... 00:14:22.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.543 15:00:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:22.543 15:00:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:22.543 15:00:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:22.543 15:00:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:22.543 15:00:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:22.543 15:00:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:22.543 15:00:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:22.543 15:00:43 -- scripts/common.sh@335 -- # IFS=.-: 00:14:22.543 15:00:43 -- scripts/common.sh@335 -- # read -ra ver1 00:14:22.543 15:00:43 -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.543 15:00:43 -- scripts/common.sh@336 -- # read -ra ver2 00:14:22.543 15:00:43 -- scripts/common.sh@337 -- # local 'op=<' 00:14:22.543 15:00:43 -- scripts/common.sh@339 -- # ver1_l=2 00:14:22.543 15:00:43 -- scripts/common.sh@340 -- # ver2_l=1 00:14:22.543 15:00:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:22.543 15:00:43 -- scripts/common.sh@343 -- # case "$op" in 00:14:22.543 15:00:43 -- scripts/common.sh@344 -- # : 1 00:14:22.543 15:00:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:22.543 15:00:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.543 15:00:43 -- scripts/common.sh@364 -- # decimal 1 00:14:22.543 15:00:43 -- scripts/common.sh@352 -- # local d=1 00:14:22.543 15:00:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.543 15:00:43 -- scripts/common.sh@354 -- # echo 1 00:14:22.543 15:00:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:22.543 15:00:43 -- scripts/common.sh@365 -- # decimal 2 00:14:22.543 15:00:43 -- scripts/common.sh@352 -- # local d=2 00:14:22.543 15:00:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.543 15:00:43 -- scripts/common.sh@354 -- # echo 2 00:14:22.543 15:00:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:22.543 15:00:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:22.543 15:00:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:22.543 15:00:43 -- scripts/common.sh@367 -- # return 0 00:14:22.543 15:00:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.543 15:00:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.543 --rc genhtml_branch_coverage=1 00:14:22.543 --rc genhtml_function_coverage=1 00:14:22.543 --rc genhtml_legend=1 00:14:22.543 --rc geninfo_all_blocks=1 00:14:22.543 --rc geninfo_unexecuted_blocks=1 00:14:22.543 00:14:22.543 ' 00:14:22.543 15:00:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.543 --rc genhtml_branch_coverage=1 00:14:22.543 --rc genhtml_function_coverage=1 00:14:22.543 --rc genhtml_legend=1 00:14:22.543 --rc geninfo_all_blocks=1 00:14:22.543 --rc geninfo_unexecuted_blocks=1 00:14:22.543 00:14:22.543 ' 00:14:22.543 
15:00:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.543 --rc genhtml_branch_coverage=1 00:14:22.543 --rc genhtml_function_coverage=1 00:14:22.543 --rc genhtml_legend=1 00:14:22.543 --rc geninfo_all_blocks=1 00:14:22.543 --rc geninfo_unexecuted_blocks=1 00:14:22.543 00:14:22.543 ' 00:14:22.543 15:00:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.543 --rc genhtml_branch_coverage=1 00:14:22.543 --rc genhtml_function_coverage=1 00:14:22.543 --rc genhtml_legend=1 00:14:22.543 --rc geninfo_all_blocks=1 00:14:22.543 --rc geninfo_unexecuted_blocks=1 00:14:22.543 00:14:22.543 ' 00:14:22.543 15:00:43 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:22.543 15:00:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:22.543 15:00:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.543 15:00:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.543 15:00:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:22.543 15:00:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:22.543 15:00:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.543 15:00:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:22.543 15:00:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:22.543 15:00:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.543 15:00:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.543 15:00:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:22.543 15:00:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:22.543 15:00:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:22.543 15:00:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:22.543 15:00:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:22.543 15:00:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:22.543 15:00:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.543 15:00:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.543 15:00:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:22.543 15:00:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:22.543 15:00:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:22.543 15:00:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:22.543 15:00:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:22.543 15:00:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:22.543 15:00:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:22.543 15:00:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:22.543 15:00:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:22.543 15:00:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:22.543 15:00:43 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.543 15:00:43 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:22.543 15:00:43 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:22.543 15:00:43 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:22.543 15:00:43 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:22.543 15:00:43 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:22.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.543 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.543 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.543 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.543 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.543 15:00:43 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=81912 00:14:22.543 15:00:43 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:22.543 15:00:43 -- ftl/ftl.sh@38 -- # waitforlisten 81912 00:14:22.543 15:00:43 -- common/autotest_common.sh@829 -- # '[' -z 81912 ']' 00:14:22.543 15:00:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.543 15:00:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.543 15:00:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.543 15:00:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.543 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:14:22.544 [2024-11-18 15:00:43.709942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:22.544 [2024-11-18 15:00:43.710592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ] 00:14:22.544 [2024-11-18 15:00:43.859193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.544 [2024-11-18 15:00:43.890436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:22.544 [2024-11-18 15:00:43.890633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.544 15:00:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.544 15:00:44 -- common/autotest_common.sh@862 -- # return 0 00:14:22.544 15:00:44 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:22.544 15:00:44 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:22.544 15:00:45 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:22.544 15:00:45 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:22.544 15:00:45 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:22.544 15:00:45 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:22.544 15:00:45 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:22.544 15:00:45 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:22.544 15:00:45 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:22.544 15:00:45 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:22.544 15:00:45 -- 
ftl/ftl.sh@50 -- # break 00:14:22.544 15:00:45 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:22.544 15:00:45 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:22.544 15:00:45 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:22.544 15:00:45 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:22.544 15:00:45 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:22.544 15:00:45 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:22.544 15:00:45 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:22.544 15:00:45 -- ftl/ftl.sh@63 -- # break 00:14:22.544 15:00:45 -- ftl/ftl.sh@66 -- # killprocess 81912 00:14:22.544 15:00:45 -- common/autotest_common.sh@936 -- # '[' -z 81912 ']' 00:14:22.544 15:00:45 -- common/autotest_common.sh@940 -- # kill -0 81912 00:14:22.544 15:00:45 -- common/autotest_common.sh@941 -- # uname 00:14:22.544 15:00:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.544 15:00:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81912 00:14:22.544 killing process with pid 81912 00:14:22.544 15:00:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.544 15:00:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.544 15:00:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81912' 00:14:22.544 15:00:45 -- common/autotest_common.sh@955 -- # kill 81912 00:14:22.544 15:00:45 -- common/autotest_common.sh@960 -- # wait 81912 00:14:22.802 15:00:46 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:22.803 15:00:46 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:22.803 15:00:46 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:22.803 15:00:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:22.803 15:00:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.803 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:22.803 ************************************ 00:14:22.803 START TEST ftl_fio_basic 00:14:22.803 ************************************ 00:14:22.803 15:00:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:22.803 * Looking for test storage... 
00:14:22.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.803 15:00:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:22.803 15:00:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:22.803 15:00:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:22.803 15:00:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:22.803 15:00:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:22.803 15:00:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:22.803 15:00:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:22.803 15:00:46 -- scripts/common.sh@335 -- # IFS=.-: 00:14:22.803 15:00:46 -- scripts/common.sh@335 -- # read -ra ver1 00:14:22.803 15:00:46 -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.803 15:00:46 -- scripts/common.sh@336 -- # read -ra ver2 00:14:22.803 15:00:46 -- scripts/common.sh@337 -- # local 'op=<' 00:14:22.803 15:00:46 -- scripts/common.sh@339 -- # ver1_l=2 00:14:22.803 15:00:46 -- scripts/common.sh@340 -- # ver2_l=1 00:14:22.803 15:00:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:22.803 15:00:46 -- scripts/common.sh@343 -- # case "$op" in 00:14:22.803 15:00:46 -- scripts/common.sh@344 -- # : 1 00:14:22.803 15:00:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:22.803 15:00:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.803 15:00:46 -- scripts/common.sh@364 -- # decimal 1 00:14:22.803 15:00:46 -- scripts/common.sh@352 -- # local d=1 00:14:22.803 15:00:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.803 15:00:46 -- scripts/common.sh@354 -- # echo 1 00:14:22.803 15:00:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:22.803 15:00:46 -- scripts/common.sh@365 -- # decimal 2 00:14:22.803 15:00:46 -- scripts/common.sh@352 -- # local d=2 00:14:22.803 15:00:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.803 15:00:46 -- scripts/common.sh@354 -- # echo 2 00:14:22.803 15:00:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:22.803 15:00:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:22.803 15:00:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:22.803 15:00:46 -- scripts/common.sh@367 -- # return 0 00:14:22.803 15:00:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.803 15:00:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.803 --rc genhtml_branch_coverage=1 00:14:22.803 --rc genhtml_function_coverage=1 00:14:22.803 --rc genhtml_legend=1 00:14:22.803 --rc geninfo_all_blocks=1 00:14:22.803 --rc geninfo_unexecuted_blocks=1 00:14:22.803 00:14:22.803 ' 00:14:22.803 15:00:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.803 --rc genhtml_branch_coverage=1 00:14:22.803 --rc genhtml_function_coverage=1 00:14:22.803 --rc genhtml_legend=1 00:14:22.803 --rc geninfo_all_blocks=1 00:14:22.803 --rc geninfo_unexecuted_blocks=1 00:14:22.803 00:14:22.803 ' 00:14:22.803 15:00:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.803 --rc genhtml_branch_coverage=1 00:14:22.803 --rc genhtml_function_coverage=1 00:14:22.803 --rc genhtml_legend=1 00:14:22.803 --rc geninfo_all_blocks=1 00:14:22.803 --rc geninfo_unexecuted_blocks=1 00:14:22.803 00:14:22.803 ' 00:14:22.803 15:00:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.803 --rc genhtml_branch_coverage=1 00:14:22.803 --rc genhtml_function_coverage=1 00:14:22.803 --rc genhtml_legend=1 00:14:22.803 --rc geninfo_all_blocks=1 00:14:22.803 --rc geninfo_unexecuted_blocks=1 00:14:22.803 00:14:22.803 ' 00:14:22.803 15:00:46 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:22.803 15:00:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:22.803 15:00:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.803 15:00:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:22.803 15:00:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:22.803 15:00:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:22.803 15:00:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.803 15:00:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:22.803 15:00:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:22.803 15:00:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.803 15:00:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.803 15:00:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:22.803 15:00:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:22.803 15:00:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:22.803 15:00:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:22.803 15:00:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:22.803 15:00:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:22.803 15:00:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.803 15:00:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.803 15:00:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:22.803 15:00:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:22.803 15:00:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:22.803 15:00:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:22.803 15:00:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:22.803 15:00:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:22.803 15:00:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:22.803 15:00:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:22.803 15:00:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:22.803 15:00:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:22.803 15:00:46 -- ftl/fio.sh@11 -- # declare -A suite 00:14:22.803 15:00:46 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:22.803 15:00:46 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:22.803 15:00:46 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:22.803 15:00:46 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.803 15:00:46 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:22.803 15:00:46 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:22.803 15:00:46 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:22.803 15:00:46 -- ftl/fio.sh@26 -- # uuid= 00:14:22.803 15:00:46 -- ftl/fio.sh@27 -- # timeout=240 00:14:22.803 15:00:46 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:22.803 15:00:46 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:22.803 15:00:46 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:22.803 15:00:46 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:22.803 15:00:46 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:22.803 15:00:46 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:22.803 15:00:46 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:22.803 15:00:46 -- ftl/fio.sh@45 -- # svcpid=82027 00:14:22.803 15:00:46 -- ftl/fio.sh@46 -- # waitforlisten 82027 00:14:22.803 15:00:46 -- common/autotest_common.sh@829 -- # '[' -z 82027 ']' 00:14:22.803 15:00:46 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:22.803 15:00:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.803 15:00:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.804 15:00:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.804 15:00:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.804 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:14:23.062 [2024-11-18 15:00:46.418159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
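
The ftl_fio_basic prologue that follows builds a two-device FTL stack out of the hardware probed earlier: base data on 0000:00:07.0, non-volatile cache on 0000:00:06.0. A condensed sketch, using only RPCs that appear in the trace below; lvs_uuid and lvol_uuid stand in for this run's 49244a58-... and 8dc1b510-... values:

    # FTL bdev stack assembled below:
    #   nvme0 (0000:00:07.0) -> lvstore "lvs" -> thin lvol        -> ftl0 data device
    #   nvc0  (0000:00:06.0) -> nvc0n1p0 split (5171 MiB)         -> ftl0 NV cache
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    "$RPC" bdev_lvol_create_lvstore nvme0n1 lvs
    "$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"    # thin-provisioned, 103424 MiB
    "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    "$RPC" bdev_split_create nvc0n1 -s 5171 1
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 60
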
00:14:23.062 [2024-11-18 15:00:46.418473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82027 ] 00:14:23.062 [2024-11-18 15:00:46.565626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:23.062 [2024-11-18 15:00:46.605094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.062 [2024-11-18 15:00:46.605727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.062 [2024-11-18 15:00:46.605938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.062 [2024-11-18 15:00:46.605991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.628 15:00:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.628 15:00:47 -- common/autotest_common.sh@862 -- # return 0 00:14:23.628 15:00:47 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:23.628 15:00:47 -- ftl/common.sh@54 -- # local name=nvme0 00:14:23.628 15:00:47 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:23.628 15:00:47 -- ftl/common.sh@56 -- # local size=103424 00:14:23.628 15:00:47 -- ftl/common.sh@59 -- # local base_bdev 00:14:23.628 15:00:47 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:23.887 15:00:47 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:23.887 15:00:47 -- ftl/common.sh@62 -- # local base_size 00:14:23.887 15:00:47 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:23.887 15:00:47 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:23.887 15:00:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:23.887 15:00:47 -- common/autotest_common.sh@1369 -- # local bs 00:14:23.887 15:00:47 -- common/autotest_common.sh@1370 -- # local nb 00:14:23.887 15:00:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:24.146 15:00:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:24.146 { 00:14:24.146 "name": "nvme0n1", 00:14:24.146 "aliases": [ 00:14:24.146 "cfaf75ab-b7bd-41b3-a724-5bd3a6aa1a43" 00:14:24.146 ], 00:14:24.146 "product_name": "NVMe disk", 00:14:24.146 "block_size": 4096, 00:14:24.146 "num_blocks": 1310720, 00:14:24.146 "uuid": "cfaf75ab-b7bd-41b3-a724-5bd3a6aa1a43", 00:14:24.146 "assigned_rate_limits": { 00:14:24.146 "rw_ios_per_sec": 0, 00:14:24.146 "rw_mbytes_per_sec": 0, 00:14:24.146 "r_mbytes_per_sec": 0, 00:14:24.146 "w_mbytes_per_sec": 0 00:14:24.146 }, 00:14:24.146 "claimed": false, 00:14:24.146 "zoned": false, 00:14:24.146 "supported_io_types": { 00:14:24.146 "read": true, 00:14:24.146 "write": true, 00:14:24.146 "unmap": true, 00:14:24.146 "write_zeroes": true, 00:14:24.146 "flush": true, 00:14:24.146 "reset": true, 00:14:24.146 "compare": true, 00:14:24.146 "compare_and_write": false, 00:14:24.146 "abort": true, 00:14:24.146 "nvme_admin": true, 00:14:24.146 "nvme_io": true 00:14:24.146 }, 00:14:24.146 "driver_specific": { 00:14:24.146 "nvme": [ 00:14:24.146 { 00:14:24.146 "pci_address": "0000:00:07.0", 00:14:24.146 "trid": { 00:14:24.146 "trtype": "PCIe", 00:14:24.146 "traddr": "0000:00:07.0" 00:14:24.146 }, 00:14:24.146 "ctrlr_data": { 00:14:24.146 "cntlid": 0, 00:14:24.146 "vendor_id": "0x1b36", 00:14:24.146 "model_number": "QEMU NVMe Ctrl", 00:14:24.146 "serial_number": 
"12341", 00:14:24.146 "firmware_revision": "8.0.0", 00:14:24.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:24.146 "oacs": { 00:14:24.146 "security": 0, 00:14:24.146 "format": 1, 00:14:24.146 "firmware": 0, 00:14:24.146 "ns_manage": 1 00:14:24.146 }, 00:14:24.146 "multi_ctrlr": false, 00:14:24.146 "ana_reporting": false 00:14:24.146 }, 00:14:24.146 "vs": { 00:14:24.146 "nvme_version": "1.4" 00:14:24.146 }, 00:14:24.146 "ns_data": { 00:14:24.146 "id": 1, 00:14:24.146 "can_share": false 00:14:24.146 } 00:14:24.146 } 00:14:24.146 ], 00:14:24.146 "mp_policy": "active_passive" 00:14:24.146 } 00:14:24.146 } 00:14:24.146 ]' 00:14:24.146 15:00:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:24.146 15:00:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:24.146 15:00:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:24.146 15:00:47 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:24.146 15:00:47 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:24.146 15:00:47 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:24.146 15:00:47 -- ftl/common.sh@63 -- # base_size=5120 00:14:24.146 15:00:47 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:24.146 15:00:47 -- ftl/common.sh@67 -- # clear_lvols 00:14:24.146 15:00:47 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:24.146 15:00:47 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:24.404 15:00:47 -- ftl/common.sh@28 -- # stores= 00:14:24.404 15:00:47 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:24.663 15:00:48 -- ftl/common.sh@68 -- # lvs=49244a58-e601-4689-99fb-0feaad60461d 00:14:24.663 15:00:48 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49244a58-e601-4689-99fb-0feaad60461d 00:14:24.921 15:00:48 -- ftl/fio.sh@48 -- # split_bdev=8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- ftl/common.sh@35 -- # local name=nvc0 00:14:24.921 15:00:48 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:24.921 15:00:48 -- ftl/common.sh@37 -- # local base_bdev=8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- ftl/common.sh@38 -- # local cache_size= 00:14:24.921 15:00:48 -- ftl/common.sh@41 -- # get_bdev_size 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- common/autotest_common.sh@1367 -- # local bdev_name=8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:24.921 15:00:48 -- common/autotest_common.sh@1369 -- # local bs 00:14:24.921 15:00:48 -- common/autotest_common.sh@1370 -- # local nb 00:14:24.921 15:00:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:24.921 15:00:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:24.921 { 00:14:24.921 "name": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:24.921 "aliases": [ 00:14:24.921 "lvs/nvme0n1p0" 00:14:24.921 ], 00:14:24.921 "product_name": "Logical Volume", 00:14:24.921 "block_size": 4096, 00:14:24.921 "num_blocks": 26476544, 00:14:24.921 "uuid": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:24.921 "assigned_rate_limits": { 00:14:24.921 "rw_ios_per_sec": 0, 00:14:24.921 "rw_mbytes_per_sec": 0, 00:14:24.921 "r_mbytes_per_sec": 0, 00:14:24.921 
"w_mbytes_per_sec": 0 00:14:24.921 }, 00:14:24.921 "claimed": false, 00:14:24.921 "zoned": false, 00:14:24.921 "supported_io_types": { 00:14:24.921 "read": true, 00:14:24.921 "write": true, 00:14:24.921 "unmap": true, 00:14:24.921 "write_zeroes": true, 00:14:24.921 "flush": false, 00:14:24.921 "reset": true, 00:14:24.921 "compare": false, 00:14:24.921 "compare_and_write": false, 00:14:24.921 "abort": false, 00:14:24.921 "nvme_admin": false, 00:14:24.921 "nvme_io": false 00:14:24.921 }, 00:14:24.921 "driver_specific": { 00:14:24.921 "lvol": { 00:14:24.921 "lvol_store_uuid": "49244a58-e601-4689-99fb-0feaad60461d", 00:14:24.921 "base_bdev": "nvme0n1", 00:14:24.921 "thin_provision": true, 00:14:24.921 "snapshot": false, 00:14:24.921 "clone": false, 00:14:24.921 "esnap_clone": false 00:14:24.921 } 00:14:24.921 } 00:14:24.921 } 00:14:24.921 ]' 00:14:24.921 15:00:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:25.180 15:00:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:25.180 15:00:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:25.180 15:00:48 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:25.180 15:00:48 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:25.180 15:00:48 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:25.180 15:00:48 -- ftl/common.sh@41 -- # local base_size=5171 00:14:25.180 15:00:48 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:25.180 15:00:48 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:25.439 15:00:48 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:25.439 15:00:48 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:25.439 15:00:48 -- ftl/common.sh@48 -- # get_bdev_size 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.439 15:00:48 -- common/autotest_common.sh@1367 -- # local bdev_name=8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.439 15:00:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:25.439 15:00:48 -- common/autotest_common.sh@1369 -- # local bs 00:14:25.439 15:00:48 -- common/autotest_common.sh@1370 -- # local nb 00:14:25.439 15:00:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.439 15:00:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:25.439 { 00:14:25.439 "name": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:25.439 "aliases": [ 00:14:25.439 "lvs/nvme0n1p0" 00:14:25.439 ], 00:14:25.439 "product_name": "Logical Volume", 00:14:25.439 "block_size": 4096, 00:14:25.439 "num_blocks": 26476544, 00:14:25.439 "uuid": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:25.439 "assigned_rate_limits": { 00:14:25.439 "rw_ios_per_sec": 0, 00:14:25.439 "rw_mbytes_per_sec": 0, 00:14:25.439 "r_mbytes_per_sec": 0, 00:14:25.439 "w_mbytes_per_sec": 0 00:14:25.439 }, 00:14:25.439 "claimed": false, 00:14:25.439 "zoned": false, 00:14:25.439 "supported_io_types": { 00:14:25.439 "read": true, 00:14:25.439 "write": true, 00:14:25.439 "unmap": true, 00:14:25.439 "write_zeroes": true, 00:14:25.439 "flush": false, 00:14:25.439 "reset": true, 00:14:25.439 "compare": false, 00:14:25.439 "compare_and_write": false, 00:14:25.439 "abort": false, 00:14:25.439 "nvme_admin": false, 00:14:25.439 "nvme_io": false 00:14:25.439 }, 00:14:25.439 "driver_specific": { 00:14:25.439 "lvol": { 00:14:25.439 "lvol_store_uuid": "49244a58-e601-4689-99fb-0feaad60461d", 00:14:25.439 "base_bdev": "nvme0n1", 00:14:25.439 "thin_provision": true, 
00:14:25.439 "snapshot": false, 00:14:25.439 "clone": false, 00:14:25.439 "esnap_clone": false 00:14:25.439 } 00:14:25.439 } 00:14:25.439 } 00:14:25.439 ]' 00:14:25.439 15:00:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:25.439 15:00:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:25.439 15:00:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:25.697 15:00:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:25.697 15:00:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:25.697 15:00:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:25.697 15:00:49 -- ftl/common.sh@48 -- # cache_size=5171 00:14:25.697 15:00:49 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:25.697 15:00:49 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:25.697 15:00:49 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:25.697 15:00:49 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:25.697 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:25.697 15:00:49 -- ftl/fio.sh@56 -- # get_bdev_size 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.697 15:00:49 -- common/autotest_common.sh@1367 -- # local bdev_name=8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.697 15:00:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:25.697 15:00:49 -- common/autotest_common.sh@1369 -- # local bs 00:14:25.697 15:00:49 -- common/autotest_common.sh@1370 -- # local nb 00:14:25.697 15:00:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc 00:14:25.955 15:00:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:25.955 { 00:14:25.955 "name": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:25.955 "aliases": [ 00:14:25.955 "lvs/nvme0n1p0" 00:14:25.955 ], 00:14:25.955 "product_name": "Logical Volume", 00:14:25.955 "block_size": 4096, 00:14:25.955 "num_blocks": 26476544, 00:14:25.955 "uuid": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:25.955 "assigned_rate_limits": { 00:14:25.955 "rw_ios_per_sec": 0, 00:14:25.955 "rw_mbytes_per_sec": 0, 00:14:25.955 "r_mbytes_per_sec": 0, 00:14:25.955 "w_mbytes_per_sec": 0 00:14:25.955 }, 00:14:25.955 "claimed": false, 00:14:25.955 "zoned": false, 00:14:25.955 "supported_io_types": { 00:14:25.955 "read": true, 00:14:25.955 "write": true, 00:14:25.955 "unmap": true, 00:14:25.955 "write_zeroes": true, 00:14:25.955 "flush": false, 00:14:25.955 "reset": true, 00:14:25.955 "compare": false, 00:14:25.955 "compare_and_write": false, 00:14:25.955 "abort": false, 00:14:25.955 "nvme_admin": false, 00:14:25.955 "nvme_io": false 00:14:25.955 }, 00:14:25.955 "driver_specific": { 00:14:25.955 "lvol": { 00:14:25.955 "lvol_store_uuid": "49244a58-e601-4689-99fb-0feaad60461d", 00:14:25.955 "base_bdev": "nvme0n1", 00:14:25.955 "thin_provision": true, 00:14:25.955 "snapshot": false, 00:14:25.955 "clone": false, 00:14:25.955 "esnap_clone": false 00:14:25.955 } 00:14:25.955 } 00:14:25.955 } 00:14:25.955 ]' 00:14:25.955 15:00:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:25.955 15:00:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:25.955 15:00:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:25.955 15:00:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:25.955 15:00:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:25.955 15:00:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:25.955 
15:00:49 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:25.955 15:00:49 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:25.955 15:00:49 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc -c nvc0n1p0 --l2p_dram_limit 60 00:14:26.215 [2024-11-18 15:00:49.656357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.656485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:26.215 [2024-11-18 15:00:49.656505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:26.215 [2024-11-18 15:00:49.656513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.656588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.656597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:26.215 [2024-11-18 15:00:49.656610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:14:26.215 [2024-11-18 15:00:49.656616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.656649] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:26.215 [2024-11-18 15:00:49.656876] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:26.215 [2024-11-18 15:00:49.656889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.656904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:26.215 [2024-11-18 15:00:49.656912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:14:26.215 [2024-11-18 15:00:49.656918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.656975] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a28d4cb2-f855-434c-9a44-49085259845e 00:14:26.215 [2024-11-18 15:00:49.658245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.658278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:26.215 [2024-11-18 15:00:49.658286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:14:26.215 [2024-11-18 15:00:49.658296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.664948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.664978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:26.215 [2024-11-18 15:00:49.664986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms 00:14:26.215 [2024-11-18 15:00:49.664995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.665083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.665093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:26.215 [2024-11-18 15:00:49.665099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:14:26.215 [2024-11-18 15:00:49.665107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.665163] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.665175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:26.215 [2024-11-18 15:00:49.665183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:26.215 [2024-11-18 15:00:49.665190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.665214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:26.215 [2024-11-18 15:00:49.666836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.666860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:26.215 [2024-11-18 15:00:49.666877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:14:26.215 [2024-11-18 15:00:49.666884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.666928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.666935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:26.215 [2024-11-18 15:00:49.666945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:26.215 [2024-11-18 15:00:49.666951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.666976] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:26.215 [2024-11-18 15:00:49.667089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:26.215 [2024-11-18 15:00:49.667112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:26.215 [2024-11-18 15:00:49.667120] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:26.215 [2024-11-18 15:00:49.667130] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:26.215 [2024-11-18 15:00:49.667138] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:26.215 [2024-11-18 15:00:49.667146] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:26.215 [2024-11-18 15:00:49.667158] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:26.215 [2024-11-18 15:00:49.667166] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:26.215 [2024-11-18 15:00:49.667172] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:26.215 [2024-11-18 15:00:49.667179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.667185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:26.215 [2024-11-18 15:00:49.667192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:14:26.215 [2024-11-18 15:00:49.667200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.667255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.215 [2024-11-18 15:00:49.667265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:26.215 [2024-11-18 15:00:49.667273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:14:26.215 [2024-11-18 15:00:49.667287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.215 [2024-11-18 15:00:49.667367] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:26.215 [2024-11-18 15:00:49.667376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:26.215 [2024-11-18 15:00:49.667385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:26.215 [2024-11-18 15:00:49.667391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.215 [2024-11-18 15:00:49.667400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:26.215 [2024-11-18 15:00:49.667405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:26.215 [2024-11-18 15:00:49.667413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:26.216 [2024-11-18 15:00:49.667425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:26.216 [2024-11-18 15:00:49.667437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:26.216 [2024-11-18 15:00:49.667441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:26.216 [2024-11-18 15:00:49.667450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:26.216 [2024-11-18 15:00:49.667455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:26.216 [2024-11-18 15:00:49.667462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:26.216 [2024-11-18 15:00:49.667467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:26.216 [2024-11-18 15:00:49.667478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:26.216 [2024-11-18 15:00:49.667488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:26.216 [2024-11-18 15:00:49.667500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:26.216 [2024-11-18 15:00:49.667517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:26.216 [2024-11-18 15:00:49.667530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:26.216 [2024-11-18 15:00:49.667549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:26.216 [2024-11-18 15:00:49.667568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667574] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:26.216 [2024-11-18 15:00:49.667586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:26.216 [2024-11-18 15:00:49.667602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:26.216 [2024-11-18 15:00:49.667614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:26.216 [2024-11-18 15:00:49.667621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:26.216 [2024-11-18 15:00:49.667625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:26.216 [2024-11-18 15:00:49.667631] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:26.216 [2024-11-18 15:00:49.667638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:26.216 [2024-11-18 15:00:49.667645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:26.216 [2024-11-18 15:00:49.667659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:26.216 [2024-11-18 15:00:49.667664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:26.216 [2024-11-18 15:00:49.667671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:26.216 [2024-11-18 15:00:49.667677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:26.216 [2024-11-18 15:00:49.667684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:26.216 [2024-11-18 15:00:49.667689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:26.216 [2024-11-18 15:00:49.667699] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:26.216 [2024-11-18 15:00:49.667707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:26.216 [2024-11-18 15:00:49.667715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:26.216 [2024-11-18 15:00:49.667721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:26.216 [2024-11-18 15:00:49.667728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:26.216 [2024-11-18 15:00:49.667733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:26.216 [2024-11-18 15:00:49.667742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:26.216 [2024-11-18 15:00:49.667748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:26.216 
[2024-11-18 15:00:49.667754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:26.216 [2024-11-18 15:00:49.667759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:26.216 [2024-11-18 15:00:49.667768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:26.216 [2024-11-18 15:00:49.667773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:26.216 [2024-11-18 15:00:49.667780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:26.216 [2024-11-18 15:00:49.667785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:26.216 [2024-11-18 15:00:49.667793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:26.216 [2024-11-18 15:00:49.667798] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:26.216 [2024-11-18 15:00:49.667805] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:26.216 [2024-11-18 15:00:49.667812] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:26.216 [2024-11-18 15:00:49.667818] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:26.216 [2024-11-18 15:00:49.667824] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:26.216 [2024-11-18 15:00:49.667831] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:26.216 [2024-11-18 15:00:49.667836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.667843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:26.216 [2024-11-18 15:00:49.667852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:14:26.216 [2024-11-18 15:00:49.667858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.674893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.674935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:26.216 [2024-11-18 15:00:49.674943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.980 ms 00:14:26.216 [2024-11-18 15:00:49.674959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.675029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.675038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:26.216 [2024-11-18 15:00:49.675045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:14:26.216 [2024-11-18 15:00:49.675054] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.685422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.685449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:26.216 [2024-11-18 15:00:49.685457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.330 ms 00:14:26.216 [2024-11-18 15:00:49.685466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.685511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.685519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:26.216 [2024-11-18 15:00:49.685528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:26.216 [2024-11-18 15:00:49.685535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.685929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.685966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:26.216 [2024-11-18 15:00:49.685973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:14:26.216 [2024-11-18 15:00:49.685981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.686083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.686102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:26.216 [2024-11-18 15:00:49.686110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:14:26.216 [2024-11-18 15:00:49.686122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.701753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.701877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:26.216 [2024-11-18 15:00:49.701891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:14:26.216 [2024-11-18 15:00:49.701900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.216 [2024-11-18 15:00:49.714326] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:26.216 [2024-11-18 15:00:49.729596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.216 [2024-11-18 15:00:49.729710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:26.217 [2024-11-18 15:00:49.729755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.605 ms 00:14:26.217 [2024-11-18 15:00:49.729773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.217 [2024-11-18 15:00:49.774406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:26.217 [2024-11-18 15:00:49.774505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:26.217 [2024-11-18 15:00:49.774548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.583 ms 00:14:26.217 [2024-11-18 15:00:49.774581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:26.217 [2024-11-18 15:00:49.774665] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:14:26.217 [2024-11-18 15:00:49.774697] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:28.748 [2024-11-18 15:00:52.007978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.008188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:28.748 [2024-11-18 15:00:52.008276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2233.296 ms 00:14:28.748 [2024-11-18 15:00:52.008306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.008543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.008584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:28.748 [2024-11-18 15:00:52.008658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:14:28.748 [2024-11-18 15:00:52.008697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.011932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.012059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:28.748 [2024-11-18 15:00:52.012137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:14:28.748 [2024-11-18 15:00:52.012163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.014535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.014659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:28.748 [2024-11-18 15:00:52.014759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:14:28.748 [2024-11-18 15:00:52.014876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.015082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.015122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:28.748 [2024-11-18 15:00:52.015191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:14:28.748 [2024-11-18 15:00:52.015266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.035485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.035605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:28.748 [2024-11-18 15:00:52.035677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.153 ms 00:14:28.748 [2024-11-18 15:00:52.035772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.039779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.039897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:28.748 [2024-11-18 15:00:52.039975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:14:28.748 [2024-11-18 15:00:52.040056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.043819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.043928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:28.748 
[2024-11-18 15:00:52.043995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:14:28.748 [2024-11-18 15:00:52.044022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.047258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.047380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:28.748 [2024-11-18 15:00:52.047449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:14:28.748 [2024-11-18 15:00:52.047489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.047559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.047638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:28.748 [2024-11-18 15:00:52.047670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:28.748 [2024-11-18 15:00:52.047690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.047795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:28.748 [2024-11-18 15:00:52.047831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:28.748 [2024-11-18 15:00:52.047857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:14:28.748 [2024-11-18 15:00:52.047877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:28.748 [2024-11-18 15:00:52.048979] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2392.167 ms, result 0 00:14:28.748 { 00:14:28.748 "name": "ftl0", 00:14:28.748 "uuid": "a28d4cb2-f855-434c-9a44-49085259845e" 00:14:28.748 } 00:14:28.748 15:00:52 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:28.749 15:00:52 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:28.749 15:00:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.749 15:00:52 -- common/autotest_common.sh@899 -- # local i 00:14:28.749 15:00:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.749 15:00:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.749 15:00:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.749 15:00:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:29.007 [ 00:14:29.007 { 00:14:29.007 "name": "ftl0", 00:14:29.007 "aliases": [ 00:14:29.007 "a28d4cb2-f855-434c-9a44-49085259845e" 00:14:29.007 ], 00:14:29.007 "product_name": "FTL disk", 00:14:29.007 "block_size": 4096, 00:14:29.007 "num_blocks": 20971520, 00:14:29.007 "uuid": "a28d4cb2-f855-434c-9a44-49085259845e", 00:14:29.007 "assigned_rate_limits": { 00:14:29.007 "rw_ios_per_sec": 0, 00:14:29.007 "rw_mbytes_per_sec": 0, 00:14:29.007 "r_mbytes_per_sec": 0, 00:14:29.007 "w_mbytes_per_sec": 0 00:14:29.007 }, 00:14:29.007 "claimed": false, 00:14:29.007 "zoned": false, 00:14:29.007 "supported_io_types": { 00:14:29.007 "read": true, 00:14:29.007 "write": true, 00:14:29.007 "unmap": true, 00:14:29.007 "write_zeroes": true, 00:14:29.007 "flush": true, 00:14:29.007 "reset": false, 00:14:29.007 "compare": false, 00:14:29.007 "compare_and_write": false, 00:14:29.007 "abort": false, 00:14:29.007 "nvme_admin": false, 00:14:29.007 "nvme_io": false 00:14:29.007 }, 00:14:29.007 
"driver_specific": { 00:14:29.007 "ftl": { 00:14:29.008 "base_bdev": "8dc1b510-a575-42b1-9a0b-6d0b1cc4d6bc", 00:14:29.008 "cache": "nvc0n1p0" 00:14:29.008 } 00:14:29.008 } 00:14:29.008 } 00:14:29.008 ] 00:14:29.008 15:00:52 -- common/autotest_common.sh@905 -- # return 0 00:14:29.008 15:00:52 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:29.008 15:00:52 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:29.266 15:00:52 -- ftl/fio.sh@70 -- # echo ']}' 00:14:29.266 15:00:52 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:29.526 [2024-11-18 15:00:52.858230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.858296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:29.526 [2024-11-18 15:00:52.858313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:29.526 [2024-11-18 15:00:52.858340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.858386] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:29.526 [2024-11-18 15:00:52.858994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.859025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:29.526 [2024-11-18 15:00:52.859049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:14:29.526 [2024-11-18 15:00:52.859058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.859579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.859596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:29.526 [2024-11-18 15:00:52.859614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:14:29.526 [2024-11-18 15:00:52.859621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.862891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.862913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:29.526 [2024-11-18 15:00:52.862926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:14:29.526 [2024-11-18 15:00:52.862934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.869145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.869183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:29.526 [2024-11-18 15:00:52.869195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:14:29.526 [2024-11-18 15:00:52.869205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.871549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.871584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:29.526 [2024-11-18 15:00:52.871610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.241 ms 00:14:29.526 [2024-11-18 15:00:52.871618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.875639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.875670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:29.526 [2024-11-18 15:00:52.875682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:14:29.526 [2024-11-18 15:00:52.875690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.875869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.875879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:29.526 [2024-11-18 15:00:52.875893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:14:29.526 [2024-11-18 15:00:52.875900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.877388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.877552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:29.526 [2024-11-18 15:00:52.877570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:14:29.526 [2024-11-18 15:00:52.877578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.526 [2024-11-18 15:00:52.878723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.526 [2024-11-18 15:00:52.878751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:29.526 [2024-11-18 15:00:52.878761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:14:29.527 [2024-11-18 15:00:52.878768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.527 [2024-11-18 15:00:52.879616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.527 [2024-11-18 15:00:52.879644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:29.527 [2024-11-18 15:00:52.879655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:14:29.527 [2024-11-18 15:00:52.879661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.527 [2024-11-18 15:00:52.880547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.527 [2024-11-18 15:00:52.880576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:29.527 [2024-11-18 15:00:52.880587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:14:29.527 [2024-11-18 15:00:52.880594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.527 [2024-11-18 15:00:52.880635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:29.527 [2024-11-18 15:00:52.880651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880911] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.880993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 
15:00:52.881129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:29.527 [2024-11-18 15:00:52.881593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.881734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:14:29.528 [2024-11-18 15:00:52.881766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.881798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.881873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.881905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.881960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:29.528 [2024-11-18 15:00:52.882462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:29.528 [2024-11-18 15:00:52.882474] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a28d4cb2-f855-434c-9a44-49085259845e 00:14:29.528 [2024-11-18 15:00:52.882482] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:29.528 [2024-11-18 15:00:52.882491] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:29.528 [2024-11-18 15:00:52.882499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:29.528 [2024-11-18 15:00:52.882508] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:14:29.528 [2024-11-18 15:00:52.882517] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:29.528 [2024-11-18 15:00:52.882528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:29.528 [2024-11-18 15:00:52.882536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:29.528 [2024-11-18 15:00:52.882545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:29.528 [2024-11-18 15:00:52.882551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:29.528 [2024-11-18 15:00:52.882569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.528 [2024-11-18 15:00:52.882577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:29.528 [2024-11-18 15:00:52.882587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:14:29.528 [2024-11-18 15:00:52.882595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.884567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.528 [2024-11-18 15:00:52.884588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:29.528 [2024-11-18 15:00:52.884603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:14:29.528 [2024-11-18 15:00:52.884612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.884694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:29.528 [2024-11-18 15:00:52.884705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:29.528 [2024-11-18 15:00:52.884716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:14:29.528 [2024-11-18 15:00:52.884724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.891396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.891506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:29.528 [2024-11-18 15:00:52.891590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.891646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.891723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.891787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:29.528 [2024-11-18 15:00:52.891836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.891858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.891958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.891992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:29.528 [2024-11-18 15:00:52.892015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.892067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.892162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.892193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:29.528 [2024-11-18 
15:00:52.892527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.892549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.905034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.905186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:29.528 [2024-11-18 15:00:52.905244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.905267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.910018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.910129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:29.528 [2024-11-18 15:00:52.910181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.910218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.910362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.910424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:29.528 [2024-11-18 15:00:52.910474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.910496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.910589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.910739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:29.528 [2024-11-18 15:00:52.910767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.910787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.910929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.910995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:29.528 [2024-11-18 15:00:52.911041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.911064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.911139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.911166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:29.528 [2024-11-18 15:00:52.911200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.911245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.911329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.911340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:29.528 [2024-11-18 15:00:52.911351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.911359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.911419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:29.528 [2024-11-18 15:00:52.911430] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:29.528 [2024-11-18 15:00:52.911440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:29.528 [2024-11-18 15:00:52.911447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:29.528 [2024-11-18 15:00:52.911639] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.359 ms, result 0 00:14:29.528 true 00:14:29.528 15:00:52 -- ftl/fio.sh@75 -- # killprocess 82027 00:14:29.528 15:00:52 -- common/autotest_common.sh@936 -- # '[' -z 82027 ']' 00:14:29.528 15:00:52 -- common/autotest_common.sh@940 -- # kill -0 82027 00:14:29.528 15:00:52 -- common/autotest_common.sh@941 -- # uname 00:14:29.528 15:00:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.528 15:00:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82027 00:14:29.528 killing process with pid 82027 00:14:29.528 15:00:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.528 15:00:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.528 15:00:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82027' 00:14:29.528 15:00:52 -- common/autotest_common.sh@955 -- # kill 82027 00:14:29.528 15:00:52 -- common/autotest_common.sh@960 -- # wait 82027 00:14:34.796 15:00:57 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:34.796 15:00:57 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:34.796 15:00:57 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:34.796 15:00:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.796 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.796 15:00:57 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:34.796 15:00:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:34.796 15:00:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:34.796 15:00:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.796 15:00:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:34.796 15:00:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.796 15:00:57 -- common/autotest_common.sh@1330 -- # shift 00:14:34.796 15:00:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:34.796 15:00:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.796 15:00:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.796 15:00:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:34.796 15:00:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:34.796 15:00:57 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.796 15:00:57 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.796 15:00:57 -- common/autotest_common.sh@1336 -- # break 00:14:34.796 15:00:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.796 15:00:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:34.796 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 
68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:34.796 fio-3.35 00:14:34.796 Starting 1 thread 00:14:38.100 00:14:38.100 test: (groupid=0, jobs=1): err= 0: pid=82222: Mon Nov 18 15:01:01 2024 00:14:38.100 read: IOPS=1372, BW=91.2MiB/s (95.6MB/s)(255MiB/2792msec) 00:14:38.100 slat (nsec): min=2896, max=15788, avg=3674.93, stdev=1390.86 00:14:38.100 clat (usec): min=238, max=1124, avg=325.24, stdev=58.61 00:14:38.100 lat (usec): min=242, max=1127, avg=328.91, stdev=59.14 00:14:38.100 clat percentiles (usec): 00:14:38.100 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 302], 00:14:38.100 | 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 314], 00:14:38.100 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 437], 00:14:38.100 | 99.00th=[ 578], 99.50th=[ 627], 99.90th=[ 848], 99.95th=[ 881], 00:14:38.100 | 99.99th=[ 1123] 00:14:38.100 write: IOPS=1382, BW=91.8MiB/s (96.3MB/s)(256MiB/2789msec); 0 zone resets 00:14:38.100 slat (usec): min=13, max=118, avg=17.25, stdev= 3.82 00:14:38.100 clat (usec): min=260, max=1176, avg=366.41, stdev=71.51 00:14:38.100 lat (usec): min=276, max=1236, avg=383.66, stdev=72.15 00:14:38.100 clat percentiles (usec): 00:14:38.100 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 330], 00:14:38.100 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 355], 00:14:38.100 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 494], 00:14:38.100 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 947], 99.95th=[ 1123], 00:14:38.100 | 99.99th=[ 1172] 00:14:38.100 bw ( KiB/s): min=88400, max=100368, per=100.00%, avg=94057.60, stdev=4693.08, samples=5 00:14:38.100 iops : min= 1300, max= 1476, avg=1383.20, stdev=69.02, samples=5 00:14:38.100 lat (usec) : 250=0.13%, 500=96.25%, 750=3.37%, 1000=0.21% 00:14:38.100 lat (msec) : 2=0.04% 00:14:38.100 cpu : usr=99.28%, sys=0.18%, ctx=7, majf=0, minf=1329 00:14:38.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.100 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.100 00:14:38.100 Run status group 0 (all jobs): 00:14:38.100 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=255MiB (267MB), run=2792-2792msec 00:14:38.100 WRITE: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=256MiB (269MB), run=2789-2789msec 00:14:38.366 ----------------------------------------------------- 00:14:38.366 Suppressions used: 00:14:38.366 count bytes template 00:14:38.366 1 5 /usr/src/fio/parse.c 00:14:38.366 1 8 libtcmalloc_minimal.so 00:14:38.366 1 904 libcrypto.so 00:14:38.366 ----------------------------------------------------- 00:14:38.366 00:14:38.366 15:01:01 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:38.366 15:01:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.366 15:01:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.366 15:01:01 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:38.366 15:01:01 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:38.366 15:01:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.366 15:01:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.366 15:01:01 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:38.366 15:01:01 -- 
common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:38.366 15:01:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:38.366 15:01:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.366 15:01:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:38.366 15:01:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.366 15:01:01 -- common/autotest_common.sh@1330 -- # shift 00:14:38.366 15:01:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:38.366 15:01:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.366 15:01:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.366 15:01:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:38.366 15:01:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:38.366 15:01:01 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:38.366 15:01:01 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:38.366 15:01:01 -- common/autotest_common.sh@1336 -- # break 00:14:38.366 15:01:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:38.366 15:01:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:38.625 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:38.625 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:38.625 fio-3.35 00:14:38.625 Starting 2 threads 00:15:05.185 00:15:05.185 first_half: (groupid=0, jobs=1): err= 0: pid=82303: Mon Nov 18 15:01:25 2024 00:15:05.185 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(255MiB/22714msec) 00:15:05.185 slat (nsec): min=2922, max=46845, avg=4327.65, stdev=1240.08 00:15:05.185 clat (usec): min=553, max=297421, avg=35225.85, stdev=22880.35 00:15:05.185 lat (usec): min=557, max=297426, avg=35230.17, stdev=22880.64 00:15:05.185 clat percentiles (msec): 00:15:05.185 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:15:05.185 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:05.185 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 70], 00:15:05.185 | 99.00th=[ 150], 99.50th=[ 176], 99.90th=[ 251], 99.95th=[ 271], 00:15:05.185 | 99.99th=[ 279] 00:15:05.185 write: IOPS=3442, BW=13.4MiB/s (14.1MB/s)(256MiB/19038msec); 0 zone resets 00:15:05.185 slat (usec): min=3, max=603, avg= 6.06, stdev= 5.95 00:15:05.185 clat (usec): min=358, max=89742, avg=9274.47, stdev=15709.84 00:15:05.185 lat (usec): min=369, max=89746, avg=9280.52, stdev=15709.80 00:15:05.185 clat percentiles (usec): 00:15:05.185 | 1.00th=[ 750], 5.00th=[ 988], 10.00th=[ 1139], 20.00th=[ 1401], 00:15:05.185 | 30.00th=[ 2573], 40.00th=[ 3982], 50.00th=[ 4948], 60.00th=[ 5669], 00:15:05.185 | 70.00th=[ 7111], 80.00th=[10159], 90.00th=[13435], 95.00th=[62129], 00:15:05.185 | 99.00th=[74974], 99.50th=[81265], 99.90th=[87557], 99.95th=[87557], 00:15:05.185 | 99.99th=[88605] 00:15:05.185 bw ( KiB/s): min= 152, max=41664, per=93.75%, avg=24966.10, stdev=14727.27, samples=21 00:15:05.185 iops : min= 38, max=10416, avg=6241.52, 
stdev=3681.82, samples=21 00:15:05.185 lat (usec) : 500=0.03%, 750=0.48%, 1000=2.17% 00:15:05.185 lat (msec) : 2=11.09%, 4=6.65%, 10=19.79%, 20=6.95%, 50=46.77% 00:15:05.185 lat (msec) : 100=4.54%, 250=1.48%, 500=0.05% 00:15:05.185 cpu : usr=99.44%, sys=0.11%, ctx=44, majf=0, minf=5603 00:15:05.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:05.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.185 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.185 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.185 second_half: (groupid=0, jobs=1): err= 0: pid=82304: Mon Nov 18 15:01:25 2024 00:15:05.185 read: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(255MiB/22878msec) 00:15:05.185 slat (nsec): min=3026, max=23256, avg=4734.76, stdev=1137.54 00:15:05.185 clat (usec): min=546, max=301045, avg=33966.75, stdev=23678.47 00:15:05.185 lat (usec): min=551, max=301049, avg=33971.48, stdev=23678.50 00:15:05.185 clat percentiles (msec): 00:15:05.185 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:15:05.185 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:15:05.185 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 45], 00:15:05.185 | 99.00th=[ 165], 99.50th=[ 213], 99.90th=[ 266], 99.95th=[ 275], 00:15:05.185 | 99.99th=[ 300] 00:15:05.185 write: IOPS=3328, BW=13.0MiB/s (13.6MB/s)(256MiB/19688msec); 0 zone resets 00:15:05.185 slat (usec): min=3, max=378, avg= 5.94, stdev= 3.17 00:15:05.185 clat (usec): min=369, max=90769, avg=10794.23, stdev=17527.22 00:15:05.185 lat (usec): min=380, max=90775, avg=10800.17, stdev=17527.17 00:15:05.185 clat percentiles (usec): 00:15:05.185 | 1.00th=[ 701], 5.00th=[ 996], 10.00th=[ 1254], 20.00th=[ 2057], 00:15:05.185 | 30.00th=[ 2868], 40.00th=[ 4113], 50.00th=[ 5145], 60.00th=[ 5932], 00:15:05.185 | 70.00th=[ 7898], 80.00th=[10683], 90.00th=[25560], 95.00th=[64750], 00:15:05.185 | 99.00th=[76022], 99.50th=[80217], 99.90th=[88605], 99.95th=[88605], 00:15:05.185 | 99.99th=[90702] 00:15:05.185 bw ( KiB/s): min= 1440, max=46816, per=78.75%, avg=20971.52, stdev=13010.03, samples=25 00:15:05.185 iops : min= 360, max=11704, avg=5242.88, stdev=3252.51, samples=25 00:15:05.185 lat (usec) : 500=0.01%, 750=0.77%, 1000=1.76% 00:15:05.185 lat (msec) : 2=7.12%, 4=10.20%, 10=19.96%, 20=7.00%, 50=47.20% 00:15:05.185 lat (msec) : 100=4.70%, 250=1.21%, 500=0.08% 00:15:05.185 cpu : usr=99.41%, sys=0.11%, ctx=32, majf=0, minf=5533 00:15:05.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:05.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.185 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.185 issued rwts: total=65325,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.185 00:15:05.185 Run status group 0 (all jobs): 00:15:05.185 READ: bw=22.3MiB/s (23.4MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.8MB/s), io=510MiB (535MB), run=22714-22878msec 00:15:05.185 WRITE: bw=26.0MiB/s (27.3MB/s), 13.0MiB/s-13.4MiB/s (13.6MB/s-14.1MB/s), io=512MiB (537MB), run=19038-19688msec 00:15:05.185 ----------------------------------------------------- 00:15:05.185 Suppressions used: 00:15:05.185 count bytes template 00:15:05.185 2 10 /usr/src/fio/parse.c 00:15:05.185 2 192 /usr/src/fio/iolog.c 00:15:05.185 1 8 
libtcmalloc_minimal.so 00:15:05.185 1 904 libcrypto.so 00:15:05.185 ----------------------------------------------------- 00:15:05.185 00:15:05.185 15:01:26 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:05.185 15:01:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.186 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:15:05.186 15:01:26 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:05.186 15:01:26 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:05.186 15:01:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.186 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:15:05.186 15:01:26 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:05.186 15:01:26 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:05.186 15:01:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:05.186 15:01:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:05.186 15:01:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:05.186 15:01:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:05.186 15:01:26 -- common/autotest_common.sh@1330 -- # shift 00:15:05.186 15:01:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:05.186 15:01:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:05.186 15:01:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:05.186 15:01:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:05.186 15:01:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:05.186 15:01:27 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:05.186 15:01:27 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:05.186 15:01:27 -- common/autotest_common.sh@1336 -- # break 00:15:05.186 15:01:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:05.186 15:01:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:05.186 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:05.186 fio-3.35 00:15:05.186 Starting 1 thread 00:15:17.398 00:15:17.398 test: (groupid=0, jobs=1): err= 0: pid=82602: Mon Nov 18 15:01:39 2024 00:15:17.398 read: IOPS=8062, BW=31.5MiB/s (33.0MB/s)(255MiB/8087msec) 00:15:17.398 slat (nsec): min=2938, max=23734, avg=4818.88, stdev=939.90 00:15:17.398 clat (usec): min=515, max=30865, avg=15866.13, stdev=1687.61 00:15:17.398 lat (usec): min=519, max=30870, avg=15870.95, stdev=1687.64 00:15:17.398 clat percentiles (usec): 00:15:17.398 | 1.00th=[14615], 5.00th=[14877], 10.00th=[14877], 20.00th=[15139], 00:15:17.398 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:15:17.398 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16319], 95.00th=[20055], 00:15:17.398 | 99.00th=[23200], 99.50th=[24511], 99.90th=[28705], 99.95th=[30016], 00:15:17.398 | 99.99th=[30540] 00:15:17.398 write: IOPS=16.3k, BW=63.6MiB/s (66.6MB/s)(256MiB/4028msec); 0 zone resets 00:15:17.398 slat (usec): min=4, max=200, avg= 6.89, stdev= 2.48 00:15:17.398 clat 
(usec): min=499, max=46044, avg=7824.64, stdev=9607.22 00:15:17.398 lat (usec): min=506, max=46050, avg=7831.53, stdev=9607.18 00:15:17.398 clat percentiles (usec): 00:15:17.398 | 1.00th=[ 619], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 889], 00:15:17.398 | 30.00th=[ 1057], 40.00th=[ 1450], 50.00th=[ 5407], 60.00th=[ 6259], 00:15:17.398 | 70.00th=[ 7242], 80.00th=[ 8455], 90.00th=[27919], 95.00th=[29492], 00:15:17.398 | 99.00th=[32113], 99.50th=[35914], 99.90th=[37487], 99.95th=[38011], 00:15:17.398 | 99.99th=[43779] 00:15:17.398 bw ( KiB/s): min= 1608, max=83592, per=89.46%, avg=58220.00, stdev=23093.08, samples=9 00:15:17.398 iops : min= 402, max=20898, avg=14555.00, stdev=5773.27, samples=9 00:15:17.398 lat (usec) : 500=0.01%, 750=5.69%, 1000=7.82% 00:15:17.398 lat (msec) : 2=7.08%, 4=0.57%, 10=20.39%, 20=48.01%, 50=10.45% 00:15:17.398 cpu : usr=99.28%, sys=0.23%, ctx=34, majf=0, minf=5577 00:15:17.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:17.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.398 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.398 00:15:17.398 Run status group 0 (all jobs): 00:15:17.398 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=255MiB (267MB), run=8087-8087msec 00:15:17.398 WRITE: bw=63.6MiB/s (66.6MB/s), 63.6MiB/s-63.6MiB/s (66.6MB/s-66.6MB/s), io=256MiB (268MB), run=4028-4028msec 00:15:17.398 ----------------------------------------------------- 00:15:17.398 Suppressions used: 00:15:17.398 count bytes template 00:15:17.398 1 5 /usr/src/fio/parse.c 00:15:17.398 2 192 /usr/src/fio/iolog.c 00:15:17.398 1 8 libtcmalloc_minimal.so 00:15:17.398 1 904 libcrypto.so 00:15:17.398 ----------------------------------------------------- 00:15:17.398 00:15:17.398 15:01:40 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:17.398 15:01:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.398 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.398 15:01:40 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:17.398 Remove shared memory files 00:15:17.398 15:01:40 -- ftl/fio.sh@85 -- # remove_shm 00:15:17.398 15:01:40 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:17.398 15:01:40 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:17.398 15:01:40 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:17.398 15:01:40 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68541 /dev/shm/spdk_tgt_trace.pid80972 00:15:17.398 15:01:40 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:17.398 15:01:40 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:17.398 ************************************ 00:15:17.398 END TEST ftl_fio_basic 00:15:17.398 ************************************ 00:15:17.398 00:15:17.398 real 0m54.364s 00:15:17.398 user 2m3.724s 00:15:17.398 sys 0m2.550s 00:15:17.398 15:01:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:17.398 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.398 15:01:40 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:17.398 15:01:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:17.398 15:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.398 15:01:40 -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.398 ************************************ 00:15:17.398 START TEST ftl_bdevperf 00:15:17.398 ************************************ 00:15:17.398 15:01:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:17.398 * Looking for test storage... 00:15:17.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.398 15:01:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:17.398 15:01:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:17.398 15:01:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:17.398 15:01:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:17.398 15:01:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:17.398 15:01:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:17.398 15:01:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:17.398 15:01:40 -- scripts/common.sh@335 -- # IFS=.-: 00:15:17.398 15:01:40 -- scripts/common.sh@335 -- # read -ra ver1 00:15:17.398 15:01:40 -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.398 15:01:40 -- scripts/common.sh@336 -- # read -ra ver2 00:15:17.398 15:01:40 -- scripts/common.sh@337 -- # local 'op=<' 00:15:17.398 15:01:40 -- scripts/common.sh@339 -- # ver1_l=2 00:15:17.398 15:01:40 -- scripts/common.sh@340 -- # ver2_l=1 00:15:17.398 15:01:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:17.398 15:01:40 -- scripts/common.sh@343 -- # case "$op" in 00:15:17.398 15:01:40 -- scripts/common.sh@344 -- # : 1 00:15:17.398 15:01:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:17.398 15:01:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.398 15:01:40 -- scripts/common.sh@364 -- # decimal 1 00:15:17.398 15:01:40 -- scripts/common.sh@352 -- # local d=1 00:15:17.398 15:01:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.398 15:01:40 -- scripts/common.sh@354 -- # echo 1 00:15:17.398 15:01:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:17.398 15:01:40 -- scripts/common.sh@365 -- # decimal 2 00:15:17.398 15:01:40 -- scripts/common.sh@352 -- # local d=2 00:15:17.398 15:01:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.398 15:01:40 -- scripts/common.sh@354 -- # echo 2 00:15:17.398 15:01:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:17.398 15:01:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:17.398 15:01:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:17.398 15:01:40 -- scripts/common.sh@367 -- # return 0 00:15:17.399 15:01:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.399 15:01:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.399 --rc genhtml_branch_coverage=1 00:15:17.399 --rc genhtml_function_coverage=1 00:15:17.399 --rc genhtml_legend=1 00:15:17.399 --rc geninfo_all_blocks=1 00:15:17.399 --rc geninfo_unexecuted_blocks=1 00:15:17.399 00:15:17.399 ' 00:15:17.399 15:01:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.399 --rc genhtml_branch_coverage=1 00:15:17.399 --rc genhtml_function_coverage=1 00:15:17.399 --rc genhtml_legend=1 00:15:17.399 --rc geninfo_all_blocks=1 00:15:17.399 --rc geninfo_unexecuted_blocks=1 00:15:17.399 00:15:17.399 ' 00:15:17.399 15:01:40 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:15:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.399 --rc genhtml_branch_coverage=1 00:15:17.399 --rc genhtml_function_coverage=1 00:15:17.399 --rc genhtml_legend=1 00:15:17.399 --rc geninfo_all_blocks=1 00:15:17.399 --rc geninfo_unexecuted_blocks=1 00:15:17.399 00:15:17.399 ' 00:15:17.399 15:01:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:17.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.399 --rc genhtml_branch_coverage=1 00:15:17.399 --rc genhtml_function_coverage=1 00:15:17.399 --rc genhtml_legend=1 00:15:17.399 --rc geninfo_all_blocks=1 00:15:17.399 --rc geninfo_unexecuted_blocks=1 00:15:17.399 00:15:17.399 ' 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:17.399 15:01:40 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:17.399 15:01:40 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.399 15:01:40 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:17.399 15:01:40 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:17.399 15:01:40 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:17.399 15:01:40 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.399 15:01:40 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:17.399 15:01:40 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:17.399 15:01:40 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.399 15:01:40 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.399 15:01:40 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:17.399 15:01:40 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:17.399 15:01:40 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:17.399 15:01:40 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:17.399 15:01:40 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:17.399 15:01:40 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:17.399 15:01:40 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.399 15:01:40 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.399 15:01:40 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:17.399 15:01:40 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:17.399 15:01:40 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:17.399 15:01:40 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:17.399 15:01:40 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:17.399 15:01:40 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:17.399 15:01:40 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:17.399 15:01:40 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:17.399 15:01:40 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:17.399 15:01:40 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 
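The two assignments just above pin the base NVMe device (0000:00:07.0) and the NV-cache device (0000:00:06.0) for this run; the trace that follows launches bdevperf in idle RPC mode and blocks until its socket answers. A minimal sketch of that launch-and-wait pattern, using only the binary, flags, and socket path visible in this log; the polling loop is a stand-in for the harness's waitforlisten helper, whose body is not shown here:

# Start bdevperf idle (-z) so the FTL bdev can first be constructed over RPC;
# -T ftl0 ties the eventual run to the ftl0 bdev created later.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it responds.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done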
00:15:17.399 15:01:40 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:17.399 15:01:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.399 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=82819 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:17.399 15:01:40 -- ftl/bdevperf.sh@22 -- # waitforlisten 82819 00:15:17.399 15:01:40 -- common/autotest_common.sh@829 -- # '[' -z 82819 ']' 00:15:17.399 15:01:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.399 15:01:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.399 15:01:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.399 15:01:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.399 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.399 [2024-11-18 15:01:40.810994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.399 [2024-11-18 15:01:40.811269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82819 ] 00:15:17.399 [2024-11-18 15:01:40.957569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.657 [2024-11-18 15:01:40.997121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.224 15:01:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.224 15:01:41 -- common/autotest_common.sh@862 -- # return 0 00:15:18.224 15:01:41 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:18.224 15:01:41 -- ftl/common.sh@54 -- # local name=nvme0 00:15:18.224 15:01:41 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:18.224 15:01:41 -- ftl/common.sh@56 -- # local size=103424 00:15:18.224 15:01:41 -- ftl/common.sh@59 -- # local base_bdev 00:15:18.224 15:01:41 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:18.483 15:01:41 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:18.483 15:01:41 -- ftl/common.sh@62 -- # local base_size 00:15:18.483 15:01:41 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:18.483 15:01:41 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:18.483 15:01:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:18.483 15:01:41 -- common/autotest_common.sh@1369 -- # local bs 00:15:18.483 15:01:41 -- common/autotest_common.sh@1370 -- # local nb 00:15:18.483 15:01:41 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:18.742 15:01:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:18.742 { 00:15:18.742 "name": "nvme0n1", 00:15:18.742 "aliases": [ 
00:15:18.742 "ae3cbdd1-4088-4d63-9378-0389bd249a49" 00:15:18.742 ], 00:15:18.742 "product_name": "NVMe disk", 00:15:18.742 "block_size": 4096, 00:15:18.742 "num_blocks": 1310720, 00:15:18.742 "uuid": "ae3cbdd1-4088-4d63-9378-0389bd249a49", 00:15:18.742 "assigned_rate_limits": { 00:15:18.742 "rw_ios_per_sec": 0, 00:15:18.742 "rw_mbytes_per_sec": 0, 00:15:18.742 "r_mbytes_per_sec": 0, 00:15:18.742 "w_mbytes_per_sec": 0 00:15:18.742 }, 00:15:18.742 "claimed": true, 00:15:18.742 "claim_type": "read_many_write_one", 00:15:18.742 "zoned": false, 00:15:18.742 "supported_io_types": { 00:15:18.742 "read": true, 00:15:18.742 "write": true, 00:15:18.742 "unmap": true, 00:15:18.742 "write_zeroes": true, 00:15:18.742 "flush": true, 00:15:18.742 "reset": true, 00:15:18.742 "compare": true, 00:15:18.742 "compare_and_write": false, 00:15:18.742 "abort": true, 00:15:18.742 "nvme_admin": true, 00:15:18.742 "nvme_io": true 00:15:18.742 }, 00:15:18.742 "driver_specific": { 00:15:18.742 "nvme": [ 00:15:18.742 { 00:15:18.742 "pci_address": "0000:00:07.0", 00:15:18.742 "trid": { 00:15:18.742 "trtype": "PCIe", 00:15:18.742 "traddr": "0000:00:07.0" 00:15:18.742 }, 00:15:18.742 "ctrlr_data": { 00:15:18.742 "cntlid": 0, 00:15:18.742 "vendor_id": "0x1b36", 00:15:18.742 "model_number": "QEMU NVMe Ctrl", 00:15:18.742 "serial_number": "12341", 00:15:18.742 "firmware_revision": "8.0.0", 00:15:18.742 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:18.742 "oacs": { 00:15:18.742 "security": 0, 00:15:18.742 "format": 1, 00:15:18.742 "firmware": 0, 00:15:18.742 "ns_manage": 1 00:15:18.742 }, 00:15:18.742 "multi_ctrlr": false, 00:15:18.742 "ana_reporting": false 00:15:18.742 }, 00:15:18.742 "vs": { 00:15:18.742 "nvme_version": "1.4" 00:15:18.742 }, 00:15:18.742 "ns_data": { 00:15:18.742 "id": 1, 00:15:18.742 "can_share": false 00:15:18.742 } 00:15:18.742 } 00:15:18.742 ], 00:15:18.742 "mp_policy": "active_passive" 00:15:18.742 } 00:15:18.742 } 00:15:18.742 ]' 00:15:18.742 15:01:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:18.742 15:01:42 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:18.742 15:01:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:18.742 15:01:42 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:18.742 15:01:42 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:18.742 15:01:42 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:18.742 15:01:42 -- ftl/common.sh@63 -- # base_size=5120 00:15:18.742 15:01:42 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:18.742 15:01:42 -- ftl/common.sh@67 -- # clear_lvols 00:15:18.742 15:01:42 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:18.742 15:01:42 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:19.001 15:01:42 -- ftl/common.sh@28 -- # stores=49244a58-e601-4689-99fb-0feaad60461d 00:15:19.001 15:01:42 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:19.001 15:01:42 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49244a58-e601-4689-99fb-0feaad60461d 00:15:19.001 15:01:42 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:19.259 15:01:42 -- ftl/common.sh@68 -- # lvs=6d12ca35-7e03-45f4-b24a-43b4d7272ffb 00:15:19.259 15:01:42 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6d12ca35-7e03-45f4-b24a-43b4d7272ffb 00:15:19.518 15:01:42 -- ftl/bdevperf.sh@23 -- # 
split_bdev=5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.518 15:01:42 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.518 15:01:42 -- ftl/common.sh@35 -- # local name=nvc0 00:15:19.518 15:01:42 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:19.518 15:01:42 -- ftl/common.sh@37 -- # local base_bdev=5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.518 15:01:42 -- ftl/common.sh@38 -- # local cache_size= 00:15:19.518 15:01:42 -- ftl/common.sh@41 -- # get_bdev_size 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.518 15:01:42 -- common/autotest_common.sh@1367 -- # local bdev_name=5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.518 15:01:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:19.518 15:01:42 -- common/autotest_common.sh@1369 -- # local bs 00:15:19.518 15:01:42 -- common/autotest_common.sh@1370 -- # local nb 00:15:19.518 15:01:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:19.777 15:01:43 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:19.777 { 00:15:19.777 "name": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:19.777 "aliases": [ 00:15:19.777 "lvs/nvme0n1p0" 00:15:19.777 ], 00:15:19.777 "product_name": "Logical Volume", 00:15:19.777 "block_size": 4096, 00:15:19.777 "num_blocks": 26476544, 00:15:19.777 "uuid": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:19.777 "assigned_rate_limits": { 00:15:19.777 "rw_ios_per_sec": 0, 00:15:19.777 "rw_mbytes_per_sec": 0, 00:15:19.777 "r_mbytes_per_sec": 0, 00:15:19.777 "w_mbytes_per_sec": 0 00:15:19.777 }, 00:15:19.777 "claimed": false, 00:15:19.777 "zoned": false, 00:15:19.777 "supported_io_types": { 00:15:19.777 "read": true, 00:15:19.777 "write": true, 00:15:19.777 "unmap": true, 00:15:19.777 "write_zeroes": true, 00:15:19.777 "flush": false, 00:15:19.777 "reset": true, 00:15:19.777 "compare": false, 00:15:19.777 "compare_and_write": false, 00:15:19.777 "abort": false, 00:15:19.777 "nvme_admin": false, 00:15:19.777 "nvme_io": false 00:15:19.777 }, 00:15:19.777 "driver_specific": { 00:15:19.777 "lvol": { 00:15:19.777 "lvol_store_uuid": "6d12ca35-7e03-45f4-b24a-43b4d7272ffb", 00:15:19.777 "base_bdev": "nvme0n1", 00:15:19.777 "thin_provision": true, 00:15:19.777 "snapshot": false, 00:15:19.777 "clone": false, 00:15:19.777 "esnap_clone": false 00:15:19.777 } 00:15:19.777 } 00:15:19.777 } 00:15:19.777 ]' 00:15:19.777 15:01:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:19.777 15:01:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:19.777 15:01:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:19.777 15:01:43 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:19.777 15:01:43 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:19.777 15:01:43 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:19.777 15:01:43 -- ftl/common.sh@41 -- # local base_size=5171 00:15:19.777 15:01:43 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:19.777 15:01:43 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:20.050 15:01:43 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:20.050 15:01:43 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:20.050 15:01:43 -- ftl/common.sh@48 -- # get_bdev_size 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:20.050 15:01:43 -- common/autotest_common.sh@1367 -- # local bdev_name=5bcd90e1-3500-4cbf-833c-19a02a2d24e8 
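The get_bdev_size trace interleaved here (and repeated below when sizing the cache split) boils down to one bdev_get_bdevs RPC parsed with jq. Reconstructed from the xtrace lines alone, not copied from autotest_common.sh, the helper is roughly:

# Size of a bdev in MiB, rebuilt from the trace: block_size x num_blocks.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    # 4096 x 1310720 -> 5120 MiB for nvme0n1; 4096 x 26476544 -> 103424 MiB here.
    echo $(( bs * nb / 1024 / 1024 ))
}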
00:15:20.050 15:01:43 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:20.050 15:01:43 -- common/autotest_common.sh@1369 -- # local bs 00:15:20.050 15:01:43 -- common/autotest_common.sh@1370 -- # local nb 00:15:20.050 15:01:43 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:20.362 15:01:43 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:20.362 { 00:15:20.362 "name": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:20.362 "aliases": [ 00:15:20.362 "lvs/nvme0n1p0" 00:15:20.362 ], 00:15:20.363 "product_name": "Logical Volume", 00:15:20.363 "block_size": 4096, 00:15:20.363 "num_blocks": 26476544, 00:15:20.363 "uuid": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:20.363 "assigned_rate_limits": { 00:15:20.363 "rw_ios_per_sec": 0, 00:15:20.363 "rw_mbytes_per_sec": 0, 00:15:20.363 "r_mbytes_per_sec": 0, 00:15:20.363 "w_mbytes_per_sec": 0 00:15:20.363 }, 00:15:20.363 "claimed": false, 00:15:20.363 "zoned": false, 00:15:20.363 "supported_io_types": { 00:15:20.363 "read": true, 00:15:20.363 "write": true, 00:15:20.363 "unmap": true, 00:15:20.363 "write_zeroes": true, 00:15:20.363 "flush": false, 00:15:20.363 "reset": true, 00:15:20.363 "compare": false, 00:15:20.363 "compare_and_write": false, 00:15:20.363 "abort": false, 00:15:20.363 "nvme_admin": false, 00:15:20.363 "nvme_io": false 00:15:20.363 }, 00:15:20.363 "driver_specific": { 00:15:20.363 "lvol": { 00:15:20.363 "lvol_store_uuid": "6d12ca35-7e03-45f4-b24a-43b4d7272ffb", 00:15:20.363 "base_bdev": "nvme0n1", 00:15:20.363 "thin_provision": true, 00:15:20.363 "snapshot": false, 00:15:20.363 "clone": false, 00:15:20.363 "esnap_clone": false 00:15:20.363 } 00:15:20.363 } 00:15:20.363 } 00:15:20.363 ]' 00:15:20.363 15:01:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:20.363 15:01:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:20.363 15:01:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:20.363 15:01:43 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:20.363 15:01:43 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:20.363 15:01:43 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:20.363 15:01:43 -- ftl/common.sh@48 -- # cache_size=5171 00:15:20.363 15:01:43 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:20.363 15:01:43 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:20.363 15:01:43 -- ftl/bdevperf.sh@26 -- # get_bdev_size 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:20.363 15:01:43 -- common/autotest_common.sh@1367 -- # local bdev_name=5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:20.363 15:01:43 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:20.363 15:01:43 -- common/autotest_common.sh@1369 -- # local bs 00:15:20.363 15:01:43 -- common/autotest_common.sh@1370 -- # local nb 00:15:20.363 15:01:43 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 00:15:20.621 15:01:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:20.621 { 00:15:20.621 "name": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:20.621 "aliases": [ 00:15:20.621 "lvs/nvme0n1p0" 00:15:20.621 ], 00:15:20.621 "product_name": "Logical Volume", 00:15:20.621 "block_size": 4096, 00:15:20.621 "num_blocks": 26476544, 00:15:20.621 "uuid": "5bcd90e1-3500-4cbf-833c-19a02a2d24e8", 00:15:20.621 "assigned_rate_limits": { 00:15:20.621 
"rw_ios_per_sec": 0, 00:15:20.621 "rw_mbytes_per_sec": 0, 00:15:20.621 "r_mbytes_per_sec": 0, 00:15:20.621 "w_mbytes_per_sec": 0 00:15:20.621 }, 00:15:20.621 "claimed": false, 00:15:20.621 "zoned": false, 00:15:20.621 "supported_io_types": { 00:15:20.621 "read": true, 00:15:20.621 "write": true, 00:15:20.621 "unmap": true, 00:15:20.621 "write_zeroes": true, 00:15:20.621 "flush": false, 00:15:20.621 "reset": true, 00:15:20.621 "compare": false, 00:15:20.621 "compare_and_write": false, 00:15:20.621 "abort": false, 00:15:20.621 "nvme_admin": false, 00:15:20.621 "nvme_io": false 00:15:20.621 }, 00:15:20.621 "driver_specific": { 00:15:20.621 "lvol": { 00:15:20.621 "lvol_store_uuid": "6d12ca35-7e03-45f4-b24a-43b4d7272ffb", 00:15:20.621 "base_bdev": "nvme0n1", 00:15:20.621 "thin_provision": true, 00:15:20.621 "snapshot": false, 00:15:20.621 "clone": false, 00:15:20.621 "esnap_clone": false 00:15:20.621 } 00:15:20.621 } 00:15:20.621 } 00:15:20.621 ]' 00:15:20.621 15:01:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:20.621 15:01:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:20.621 15:01:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:20.621 15:01:44 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:20.621 15:01:44 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:20.621 15:01:44 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:20.621 15:01:44 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:20.621 15:01:44 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5bcd90e1-3500-4cbf-833c-19a02a2d24e8 -c nvc0n1p0 --l2p_dram_limit 20 00:15:20.881 [2024-11-18 15:01:44.331571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.881 [2024-11-18 15:01:44.331617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:20.881 [2024-11-18 15:01:44.331634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:20.881 [2024-11-18 15:01:44.331641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.881 [2024-11-18 15:01:44.331696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.881 [2024-11-18 15:01:44.331706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.881 [2024-11-18 15:01:44.331719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:15:20.881 [2024-11-18 15:01:44.331725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.881 [2024-11-18 15:01:44.331744] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:20.881 [2024-11-18 15:01:44.331986] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:20.881 [2024-11-18 15:01:44.331999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.881 [2024-11-18 15:01:44.332008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.881 [2024-11-18 15:01:44.332016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:15:20.881 [2024-11-18 15:01:44.332024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.881 [2024-11-18 15:01:44.332048] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49a46cc7-8150-4cd6-a4f7-9d4b56b59c4d 00:15:20.881 [2024-11-18 15:01:44.333354] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.333386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:20.882 [2024-11-18 15:01:44.333395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:20.882 [2024-11-18 15:01:44.333403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.340302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.340344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.882 [2024-11-18 15:01:44.340352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.842 ms 00:15:20.882 [2024-11-18 15:01:44.340362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.340438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.340449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.882 [2024-11-18 15:01:44.340456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:20.882 [2024-11-18 15:01:44.340464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.340508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.340517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:20.882 [2024-11-18 15:01:44.340524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:20.882 [2024-11-18 15:01:44.340531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.340550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:20.882 [2024-11-18 15:01:44.342189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.342216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.882 [2024-11-18 15:01:44.342229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:15:20.882 [2024-11-18 15:01:44.342238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.342531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.342564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:20.882 [2024-11-18 15:01:44.342593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:20.882 [2024-11-18 15:01:44.342618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.342707] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:20.882 [2024-11-18 15:01:44.342839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:20.882 [2024-11-18 15:01:44.342860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:20.882 [2024-11-18 15:01:44.342868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:20.882 [2024-11-18 15:01:44.342879] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:20.882 [2024-11-18 
15:01:44.342886] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:20.882 [2024-11-18 15:01:44.342897] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:20.882 [2024-11-18 15:01:44.342903] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:20.882 [2024-11-18 15:01:44.342911] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:20.882 [2024-11-18 15:01:44.342919] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:20.882 [2024-11-18 15:01:44.342931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.342938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:20.882 [2024-11-18 15:01:44.342946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:15:20.882 [2024-11-18 15:01:44.342951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.343003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.882 [2024-11-18 15:01:44.343014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:20.882 [2024-11-18 15:01:44.343023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:20.882 [2024-11-18 15:01:44.343028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.882 [2024-11-18 15:01:44.343087] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:20.882 [2024-11-18 15:01:44.343096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:20.882 [2024-11-18 15:01:44.343108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:20.882 [2024-11-18 15:01:44.343127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343141] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:20.882 [2024-11-18 15:01:44.343148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.882 [2024-11-18 15:01:44.343159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:20.882 [2024-11-18 15:01:44.343164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:20.882 [2024-11-18 15:01:44.343172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.882 [2024-11-18 15:01:44.343178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:20.882 [2024-11-18 15:01:44.343187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:20.882 [2024-11-18 15:01:44.343192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:20.882 [2024-11-18 15:01:44.343203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:20.882 [2024-11-18 15:01:44.343210] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:20.882 [2024-11-18 15:01:44.343404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:20.882 [2024-11-18 15:01:44.343410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:20.882 [2024-11-18 15:01:44.343424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343438] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:20.882 [2024-11-18 15:01:44.343446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:20.882 [2024-11-18 15:01:44.343467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:20.882 [2024-11-18 15:01:44.343492] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:20.882 [2024-11-18 15:01:44.343513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.882 [2024-11-18 15:01:44.343526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:20.882 [2024-11-18 15:01:44.343533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:20.882 [2024-11-18 15:01:44.343538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.882 [2024-11-18 15:01:44.343547] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:20.882 [2024-11-18 15:01:44.343557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:20.882 [2024-11-18 15:01:44.343565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.882 [2024-11-18 15:01:44.343571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.882 [2024-11-18 15:01:44.343581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:20.882 [2024-11-18 15:01:44.343588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:20.882 [2024-11-18 15:01:44.343597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:20.882 [2024-11-18 15:01:44.343602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:20.882 [2024-11-18 15:01:44.343609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:20.882 [2024-11-18 15:01:44.343615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
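A quick consistency check on the layout dump above, before the superblock metadata listing that follows: the L2P region size is simply the entry count multiplied by the 4-byte address size reported a few lines earlier, both taken verbatim from this log:

# 20971520 L2P entries x 4 B per entry -> 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB" in the dump above.
echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80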
00:15:20.882 [2024-11-18 15:01:44.343623] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:20.882 [2024-11-18 15:01:44.343632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.882 [2024-11-18 15:01:44.343644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:20.882 [2024-11-18 15:01:44.343650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:20.882 [2024-11-18 15:01:44.343657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:20.882 [2024-11-18 15:01:44.343663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:20.882 [2024-11-18 15:01:44.343672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:20.883 [2024-11-18 15:01:44.343678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:20.883 [2024-11-18 15:01:44.343685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:20.883 [2024-11-18 15:01:44.343691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:20.883 [2024-11-18 15:01:44.343701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:20.883 [2024-11-18 15:01:44.343707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:20.883 [2024-11-18 15:01:44.343717] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:20.883 [2024-11-18 15:01:44.343724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:20.883 [2024-11-18 15:01:44.343733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:20.883 [2024-11-18 15:01:44.343739] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:20.883 [2024-11-18 15:01:44.343749] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.883 [2024-11-18 15:01:44.343758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:20.883 [2024-11-18 15:01:44.343766] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:20.883 [2024-11-18 15:01:44.343771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:20.883 [2024-11-18 15:01:44.343778] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:20.883 [2024-11-18 15:01:44.343783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.343793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:20.883 [2024-11-18 15:01:44.343799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:15:20.883 [2024-11-18 15:01:44.343806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.351015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.351053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:20.883 [2024-11-18 15:01:44.351062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.180 ms 00:15:20.883 [2024-11-18 15:01:44.351070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.351140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.351153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:20.883 [2024-11-18 15:01:44.351162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:20.883 [2024-11-18 15:01:44.351171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.370443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.370491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.883 [2024-11-18 15:01:44.370504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.237 ms 00:15:20.883 [2024-11-18 15:01:44.370514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.370547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.370560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.883 [2024-11-18 15:01:44.370570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:20.883 [2024-11-18 15:01:44.370579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.371068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.371093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.883 [2024-11-18 15:01:44.371104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:15:20.883 [2024-11-18 15:01:44.371114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.371246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.371268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.883 [2024-11-18 15:01:44.371282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:15:20.883 [2024-11-18 15:01:44.371295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.378573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.378624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.883 [2024-11-18 15:01:44.378641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 7.236 ms 00:15:20.883 [2024-11-18 15:01:44.378665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.387141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:20.883 [2024-11-18 15:01:44.392653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.392678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:20.883 [2024-11-18 15:01:44.392689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.892 ms 00:15:20.883 [2024-11-18 15:01:44.392695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.451643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.883 [2024-11-18 15:01:44.451823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:20.883 [2024-11-18 15:01:44.451846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.922 ms 00:15:20.883 [2024-11-18 15:01:44.451856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.883 [2024-11-18 15:01:44.451892] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:20.883 [2024-11-18 15:01:44.451902] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:23.413 [2024-11-18 15:01:46.918855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.413 [2024-11-18 15:01:46.918928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:23.413 [2024-11-18 15:01:46.918948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2466.939 ms 00:15:23.413 [2024-11-18 15:01:46.918957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.413 [2024-11-18 15:01:46.919154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.413 [2024-11-18 15:01:46.919166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:23.413 [2024-11-18 15:01:46.919182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:15:23.413 [2024-11-18 15:01:46.919190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.413 [2024-11-18 15:01:46.922273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.413 [2024-11-18 15:01:46.922312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:23.413 [2024-11-18 15:01:46.922341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:15:23.413 [2024-11-18 15:01:46.922349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.413 [2024-11-18 15:01:46.924648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.413 [2024-11-18 15:01:46.924853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:23.413 [2024-11-18 15:01:46.924875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:15:23.413 [2024-11-18 15:01:46.924884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.413 [2024-11-18 15:01:46.925064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.414 [2024-11-18 15:01:46.925076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:23.414 
[2024-11-18 15:01:46.925091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms
00:15:23.414 [2024-11-18 15:01:46.925098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.946580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.946622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:15:23.414 [2024-11-18 15:01:46.946636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.462 ms
00:15:23.414 [2024-11-18 15:01:46.946649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.951174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.951211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:15:23.414 [2024-11-18 15:01:46.951227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.489 ms
00:15:23.414 [2024-11-18 15:01:46.951238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.952607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.952638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:15:23.414 [2024-11-18 15:01:46.952653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms
00:15:23.414 [2024-11-18 15:01:46.952664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.955858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.955889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:15:23.414 [2024-11-18 15:01:46.955900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms
00:15:23.414 [2024-11-18 15:01:46.955907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.955948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.955957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:15:23.414 [2024-11-18 15:01:46.955968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:15:23.414 [2024-11-18 15:01:46.955978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.956046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:23.414 [2024-11-18 15:01:46.956055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:15:23.414 [2024-11-18 15:01:46.956066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:15:23.414 [2024-11-18 15:01:46.956074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:23.414 [2024-11-18 15:01:46.956999] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2625.011 ms, result 0
00:15:23.414 {
00:15:23.414 "name": "ftl0",
00:15:23.414 "uuid": "49a46cc7-8150-4cd6-a4f7-9d4b56b59c4d"
00:15:23.414 }
00:15:23.414 15:01:46 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:15:23.414 15:01:46 -- ftl/bdevperf.sh@29 -- # jq -r .name
00:15:23.414 15:01:46 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:15:23.672 15:01:47 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-11-18 15:01:47.249990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 4 seconds...
00:15:27.857
00:15:27.857 Latency(us)
00:15:27.857 [2024-11-18T15:01:51.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:27.857 [2024-11-18T15:01:51.447Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:15:27.857 ftl0 : 4.00 3035.46 201.57 0.00 0.00 346.08 146.51 3251.59
00:15:27.857 [2024-11-18T15:01:51.447Z] ===================================================================================================================
00:15:27.857 [2024-11-18T15:01:51.447Z] Total : 3035.46 201.57 0.00 0.00 346.08 146.51 3251.59
00:15:27.857 0
00:15:27.857 [2024-11-18 15:01:51.256553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
15:01:51 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-18 15:01:51.353206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:15:32.041
00:15:32.041 Latency(us)
00:15:32.041 [2024-11-18T15:01:55.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:32.041 [2024-11-18T15:01:55.631Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:15:32.041 ftl0 : 4.01 11053.68 43.18 0.00 0.00 11558.62 212.68 25811.10
00:15:32.041 [2024-11-18T15:01:55.631Z] ===================================================================================================================
00:15:32.041 [2024-11-18T15:01:55.631Z] Total : 11053.68 43.18 0.00 0.00 11558.62 0.00 25811.10
00:15:32.041 0
00:15:32.041 [2024-11-18 15:01:55.371911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
15:01:55 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-18 15:01:55.469766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:15:36.230
00:15:36.230 Latency(us)
00:15:36.230 [2024-11-18T15:01:59.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.230 [2024-11-18T15:01:59.820Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:36.230 Verification LBA range: start 0x0 length 0x1400000
00:15:36.230 ftl0 : 4.00 12882.54 50.32 0.00 0.00 9912.60 142.57 20467.40
00:15:36.230 [2024-11-18T15:01:59.820Z] ===================================================================================================================
00:15:36.230 [2024-11-18T15:01:59.820Z] Total : 12882.54 50.32 0.00 0.00 9912.60 0.00 20467.40
00:15:36.230 [2024-11-18 15:01:59.480377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:15:36.230 15:01:59 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-18 15:01:59.672664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.672708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:15:36.230 [2024-11-18 15:01:59.672721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:15:36.230 [2024-11-18 15:01:59.672728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.230 [2024-11-18 15:01:59.672752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:36.230 [2024-11-18 15:01:59.673247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.673267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:15:36.230 [2024-11-18 15:01:59.673275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms
00:15:36.230 [2024-11-18 15:01:59.673284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.230 [2024-11-18 15:01:59.674908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.675074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:15:36.230 [2024-11-18 15:01:59.675088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms
00:15:36.230 [2024-11-18 15:01:59.675100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.230 [2024-11-18 15:01:59.793552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.793693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:15:36.230 [2024-11-18 15:01:59.793711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.436 ms
00:15:36.230 [2024-11-18 15:01:59.793719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.230 [2024-11-18 15:01:59.798285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.798312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:15:36.230 [2024-11-18 15:01:59.798332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms
00:15:36.230 [2024-11-18 15:01:59.798343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.230 [2024-11-18 15:01:59.799588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:36.230 [2024-11-18 15:01:59.799618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:36.230 [2024-11-18 15:01:59.799626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:15:36.230 [2024-11-18 15:01:59.799633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.803942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.803975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:36.230 [2024-11-18 15:01:59.803983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:15:36.230 [2024-11-18 15:01:59.803990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.804081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.804095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:36.230 [2024-11-18 15:01:59.804102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:36.230 [2024-11-18 15:01:59.804110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.805785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.805814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:36.230 [2024-11-18 15:01:59.805821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:15:36.230 [2024-11-18 15:01:59.805831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.806862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.806890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:36.230 [2024-11-18 15:01:59.806897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:15:36.230 [2024-11-18 15:01:59.806904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.807893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.808012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:36.230 [2024-11-18 15:01:59.808024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:15:36.230 [2024-11-18 15:01:59.808031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.809078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.230 [2024-11-18 15:01:59.809103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:36.230 [2024-11-18 15:01:59.809110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:15:36.230 [2024-11-18 15:01:59.809117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.230 [2024-11-18 15:01:59.809138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:36.230 [2024-11-18 15:01:59.809155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809181] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:36.230 [2024-11-18 15:01:59.809350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 
15:01:59.809364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:36.231 [2024-11-18 15:01:59.809549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:36.231 [2024-11-18 15:01:59.809883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:36.231 [2024-11-18 15:01:59.809890] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49a46cc7-8150-4cd6-a4f7-9d4b56b59c4d 00:15:36.231 [2024-11-18 15:01:59.809897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:36.231 
[2024-11-18 15:01:59.809902] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:36.231 [2024-11-18 15:01:59.809909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:36.231 [2024-11-18 15:01:59.809916] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:36.231 [2024-11-18 15:01:59.809923] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:36.231 [2024-11-18 15:01:59.809934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:36.231 [2024-11-18 15:01:59.809941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:36.231 [2024-11-18 15:01:59.809945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:36.231 [2024-11-18 15:01:59.809952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:36.231 [2024-11-18 15:01:59.809957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.231 [2024-11-18 15:01:59.809964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:36.231 [2024-11-18 15:01:59.809972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:15:36.231 [2024-11-18 15:01:59.809982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.231 [2024-11-18 15:01:59.811696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.231 [2024-11-18 15:01:59.811718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:36.231 [2024-11-18 15:01:59.811725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:15:36.231 [2024-11-18 15:01:59.811732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.231 [2024-11-18 15:01:59.811791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.231 [2024-11-18 15:01:59.811802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:36.232 [2024-11-18 15:01:59.811809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:36.232 [2024-11-18 15:01:59.811817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.490 [2024-11-18 15:01:59.817813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.490 [2024-11-18 15:01:59.817844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.490 [2024-11-18 15:01:59.817852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.490 [2024-11-18 15:01:59.817859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.490 [2024-11-18 15:01:59.817907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.490 [2024-11-18 15:01:59.817919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.490 [2024-11-18 15:01:59.817926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.490 [2024-11-18 15:01:59.817935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.490 [2024-11-18 15:01:59.817980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.490 [2024-11-18 15:01:59.817989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.490 [2024-11-18 15:01:59.817996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.490 [2024-11-18 15:01:59.818006] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.490 [2024-11-18 15:01:59.818019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.818030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.491 [2024-11-18 15:01:59.818038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.818046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.828136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.828174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.491 [2024-11-18 15:01:59.828182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.828190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.491 [2024-11-18 15:01:59.832552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.832562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:36.491 [2024-11-18 15:01:59.832636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.832644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:36.491 [2024-11-18 15:01:59.832687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.832696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:36.491 [2024-11-18 15:01:59.832772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.832780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:36.491 [2024-11-18 15:01:59.832823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:36.491 [2024-11-18 15:01:59.832833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.491 [2024-11-18 15:01:59.832867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:36.491 [2024-11-18 15:01:59.832880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:36.491 [2024-11-18 15:01:59.832887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms
00:15:36.491 [2024-11-18 15:01:59.832894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.491 [2024-11-18 15:01:59.832933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:36.491 [2024-11-18 15:01:59.832944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:36.491 [2024-11-18 15:01:59.832950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:36.491 [2024-11-18 15:01:59.832960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:36.491 [2024-11-18 15:01:59.833067] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 160.370 ms, result 0
00:15:36.491 true
00:15:36.491 15:01:59 -- ftl/bdevperf.sh@37 -- # killprocess 82819
00:15:36.491 15:01:59 -- common/autotest_common.sh@936 -- # '[' -z 82819 ']'
00:15:36.491 15:01:59 -- common/autotest_common.sh@940 -- # kill -0 82819
00:15:36.491 15:01:59 -- common/autotest_common.sh@941 -- # uname
00:15:36.491 15:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:36.491 15:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82819
00:15:36.491 killing process with pid 82819
Received shutdown signal, test time was about 4.000000 seconds
00:15:36.491
00:15:36.491 Latency(us)
00:15:36.491 [2024-11-18T15:02:00.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.491 [2024-11-18T15:02:00.081Z] ===================================================================================================================
00:15:36.491 [2024-11-18T15:02:00.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:36.491 15:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:36.491 15:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:36.491 15:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82819'
00:15:36.491 15:01:59 -- common/autotest_common.sh@955 -- # kill 82819
00:15:36.491 15:01:59 -- common/autotest_common.sh@960 -- # wait 82819
00:15:41.758 15:02:04 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:15:41.758 15:02:04 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:15:41.758 15:02:04 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:41.758 15:02:04 -- common/autotest_common.sh@10 -- # set +x
00:15:41.758 Remove shared memory files
00:15:41.758 15:02:04 -- ftl/bdevperf.sh@41 -- # remove_shm
00:15:41.758 15:02:04 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:15:41.758 15:02:04 -- ftl/common.sh@205 -- # rm -f rm -f
00:15:41.758 15:02:04 -- ftl/common.sh@206 -- # rm -f rm -f
00:15:41.758 15:02:04 -- ftl/common.sh@207 -- # rm -f rm -f
00:15:41.758 15:02:04 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:15:41.758 15:02:04 -- ftl/common.sh@209 -- # rm -f rm -f
00:15:41.758 ************************************
00:15:41.758 END TEST ftl_bdevperf
00:15:41.758 ************************************
00:15:41.758
00:15:41.758 real 0m23.770s
00:15:41.758 user 0m26.234s
00:15:41.758 sys 0m0.880s
00:15:41.758 15:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:41.758 15:02:04 -- common/autotest_common.sh@10 -- # set +x
00:15:41.758 15:02:04 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:15:41.758 15:02:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:15:41.758 15:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:41.758 15:02:04 -- common/autotest_common.sh@10 -- # set +x
00:15:41.758 ************************************
00:15:41.758 START TEST ftl_trim
00:15:41.758 ************************************
00:15:41.758 15:02:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:15:41.758 * Looking for test storage...
00:15:41.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:15:41.758 15:02:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:41.758 15:02:04 -- common/autotest_common.sh@1690 -- # lcov --version
00:15:41.758 15:02:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:41.758 15:02:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:41.758 15:02:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:41.758 15:02:04 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:41.758 15:02:04 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:41.758 15:02:04 -- scripts/common.sh@335 -- # IFS=.-:
00:15:41.758 15:02:04 -- scripts/common.sh@335 -- # read -ra ver1
00:15:41.758 15:02:04 -- scripts/common.sh@336 -- # IFS=.-:
00:15:41.758 15:02:04 -- scripts/common.sh@336 -- # read -ra ver2
00:15:41.758 15:02:04 -- scripts/common.sh@337 -- # local 'op=<'
00:15:41.758 15:02:04 -- scripts/common.sh@339 -- # ver1_l=2
00:15:41.758 15:02:04 -- scripts/common.sh@340 -- # ver2_l=1
00:15:41.758 15:02:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:41.758 15:02:04 -- scripts/common.sh@343 -- # case "$op" in
00:15:41.758 15:02:04 -- scripts/common.sh@344 -- # : 1
00:15:41.758 15:02:04 -- scripts/common.sh@363 -- # (( v = 0 ))
00:15:41.758 15:02:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:15:41.758 15:02:04 -- scripts/common.sh@364 -- # decimal 1 00:15:41.758 15:02:04 -- scripts/common.sh@352 -- # local d=1 00:15:41.758 15:02:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.758 15:02:04 -- scripts/common.sh@354 -- # echo 1 00:15:41.758 15:02:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:41.758 15:02:04 -- scripts/common.sh@365 -- # decimal 2 00:15:41.758 15:02:04 -- scripts/common.sh@352 -- # local d=2 00:15:41.758 15:02:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.758 15:02:04 -- scripts/common.sh@354 -- # echo 2 00:15:41.758 15:02:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:41.758 15:02:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:41.758 15:02:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:41.758 15:02:04 -- scripts/common.sh@367 -- # return 0 00:15:41.758 15:02:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.758 15:02:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:41.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.758 --rc genhtml_branch_coverage=1 00:15:41.758 --rc genhtml_function_coverage=1 00:15:41.758 --rc genhtml_legend=1 00:15:41.758 --rc geninfo_all_blocks=1 00:15:41.758 --rc geninfo_unexecuted_blocks=1 00:15:41.758 00:15:41.758 ' 00:15:41.758 15:02:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:41.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.758 --rc genhtml_branch_coverage=1 00:15:41.758 --rc genhtml_function_coverage=1 00:15:41.758 --rc genhtml_legend=1 00:15:41.758 --rc geninfo_all_blocks=1 00:15:41.758 --rc geninfo_unexecuted_blocks=1 00:15:41.758 00:15:41.758 ' 00:15:41.758 15:02:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:41.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.758 --rc genhtml_branch_coverage=1 00:15:41.758 --rc genhtml_function_coverage=1 00:15:41.758 --rc genhtml_legend=1 00:15:41.758 --rc geninfo_all_blocks=1 00:15:41.758 --rc geninfo_unexecuted_blocks=1 00:15:41.758 00:15:41.758 ' 00:15:41.758 15:02:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:41.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.758 --rc genhtml_branch_coverage=1 00:15:41.758 --rc genhtml_function_coverage=1 00:15:41.758 --rc genhtml_legend=1 00:15:41.758 --rc geninfo_all_blocks=1 00:15:41.758 --rc geninfo_unexecuted_blocks=1 00:15:41.758 00:15:41.758 ' 00:15:41.758 15:02:04 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:41.758 15:02:04 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:41.758 15:02:04 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.758 15:02:04 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.758 15:02:04 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:41.758 15:02:04 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.758 15:02:04 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.758 15:02:04 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:41.758 15:02:04 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:41.758 15:02:04 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.758 15:02:04 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.758 15:02:04 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:41.758 15:02:04 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:41.758 15:02:04 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.758 15:02:04 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.758 15:02:04 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:41.758 15:02:04 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:41.759 15:02:04 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.759 15:02:04 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.759 15:02:04 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:41.759 15:02:04 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:41.759 15:02:04 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.759 15:02:04 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.759 15:02:04 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.759 15:02:04 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.759 15:02:04 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:41.759 15:02:04 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:41.759 15:02:04 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.759 15:02:04 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.759 15:02:04 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.759 15:02:04 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:41.759 15:02:04 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:41.759 15:02:04 -- ftl/trim.sh@25 -- # timeout=240 00:15:41.759 15:02:04 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:41.759 15:02:04 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:41.759 15:02:04 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:41.759 15:02:04 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:41.759 15:02:04 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:41.759 15:02:04 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.759 15:02:04 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.759 15:02:04 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:41.759 15:02:04 -- ftl/trim.sh@40 -- # svcpid=83188 00:15:41.759 15:02:04 -- ftl/trim.sh@41 -- # waitforlisten 83188 00:15:41.759 15:02:04 -- common/autotest_common.sh@829 -- # '[' -z 83188 ']' 00:15:41.759 15:02:04 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:41.759 15:02:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.759 
15:02:04 -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:41.759 15:02:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:02:04 -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:41.759 15:02:04 -- common/autotest_common.sh@10 -- # set +x
00:15:41.759 [2024-11-18 15:02:04.620376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:41.759 [2024-11-18 15:02:04.621065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83188 ]
00:15:41.759 [2024-11-18 15:02:04.766102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:41.759 [2024-11-18 15:02:04.806646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:41.759 [2024-11-18 15:02:04.807120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:41.759 [2024-11-18 15:02:04.807375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:41.759 [2024-11-18 15:02:04.807418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:15:42.018 15:02:05 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:42.018 15:02:05 -- common/autotest_common.sh@862 -- # return 0
00:15:42.018 15:02:05 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:15:42.018 15:02:05 -- ftl/common.sh@54 -- # local name=nvme0
00:15:42.018 15:02:05 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:15:42.018 15:02:05 -- ftl/common.sh@56 -- # local size=103424
00:15:42.018 15:02:05 -- ftl/common.sh@59 -- # local base_bdev
00:15:42.018 15:02:05 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:15:42.276 15:02:05 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:15:42.276 15:02:05 -- ftl/common.sh@62 -- # local base_size
00:15:42.276 15:02:05 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:15:42.276 15:02:05 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:15:42.276 15:02:05 -- common/autotest_common.sh@1368 -- # local bdev_info
00:15:42.276 15:02:05 -- common/autotest_common.sh@1369 -- # local bs
00:15:42.276 15:02:05 -- common/autotest_common.sh@1370 -- # local nb
00:15:42.276 15:02:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:15:42.535 15:02:05 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:15:42.535 {
00:15:42.535 "name": "nvme0n1",
00:15:42.535 "aliases": [
00:15:42.535 "e1df7611-4963-43ab-b7a7-4819a9d9fdab"
00:15:42.535 ],
00:15:42.535 "product_name": "NVMe disk",
00:15:42.535 "block_size": 4096,
00:15:42.535 "num_blocks": 1310720,
00:15:42.535 "uuid": "e1df7611-4963-43ab-b7a7-4819a9d9fdab",
00:15:42.535 "assigned_rate_limits": {
00:15:42.535 "rw_ios_per_sec": 0,
00:15:42.535 "rw_mbytes_per_sec": 0,
00:15:42.535 "r_mbytes_per_sec": 0,
00:15:42.535 "w_mbytes_per_sec": 0
00:15:42.535 },
00:15:42.535 "claimed": true,
00:15:42.535 "claim_type": "read_many_write_one",
00:15:42.535 "zoned": false,
00:15:42.535 "supported_io_types": {
00:15:42.535 "read": true,
00:15:42.535 "write": true,
00:15:42.535 "unmap": true,
00:15:42.535 "write_zeroes": true,
00:15:42.535 "flush": true,
00:15:42.535 "reset": true,
00:15:42.535 "compare": true,
00:15:42.535 "compare_and_write": false,
00:15:42.535 "abort": true,
00:15:42.535 "nvme_admin": true,
00:15:42.535 "nvme_io": true
00:15:42.535 },
00:15:42.535 "driver_specific": {
00:15:42.535 "nvme": [
00:15:42.535 {
00:15:42.535 "pci_address": "0000:00:07.0",
00:15:42.535 "trid": {
00:15:42.535 "trtype": "PCIe",
00:15:42.535 "traddr": "0000:00:07.0"
00:15:42.535 },
00:15:42.535 "ctrlr_data": {
00:15:42.535 "cntlid": 0,
00:15:42.535 "vendor_id": "0x1b36",
00:15:42.535 "model_number": "QEMU NVMe Ctrl",
00:15:42.535 "serial_number": "12341",
00:15:42.535 "firmware_revision": "8.0.0",
00:15:42.535 "subnqn": "nqn.2019-08.org.qemu:12341",
00:15:42.535 "oacs": {
00:15:42.535 "security": 0,
00:15:42.535 "format": 1,
00:15:42.535 "firmware": 0,
00:15:42.535 "ns_manage": 1
00:15:42.535 },
00:15:42.535 "multi_ctrlr": false,
00:15:42.535 "ana_reporting": false
00:15:42.535 },
00:15:42.535 "vs": {
00:15:42.535 "nvme_version": "1.4"
00:15:42.535 },
00:15:42.535 "ns_data": {
00:15:42.535 "id": 1,
00:15:42.535 "can_share": false
00:15:42.535 }
00:15:42.535 }
00:15:42.535 ],
00:15:42.535 "mp_policy": "active_passive"
00:15:42.535 }
00:15:42.535 }
00:15:42.535 ]'
00:15:42.535 15:02:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:15:42.535 15:02:05 -- common/autotest_common.sh@1372 -- # bs=4096
00:15:42.535 15:02:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:15:42.535 15:02:05 -- common/autotest_common.sh@1373 -- # nb=1310720
00:15:42.535 15:02:05 -- common/autotest_common.sh@1376 -- # bdev_size=5120
00:15:42.535 15:02:05 -- common/autotest_common.sh@1377 -- # echo 5120
00:15:42.535 15:02:05 -- ftl/common.sh@63 -- # base_size=5120
00:15:42.535 15:02:05 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:15:42.535 15:02:05 -- ftl/common.sh@67 -- # clear_lvols
00:15:42.535 15:02:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:15:42.535 15:02:05 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:15:42.793 15:02:06 -- ftl/common.sh@28 -- # stores=6d12ca35-7e03-45f4-b24a-43b4d7272ffb
00:15:42.793 15:02:06 -- ftl/common.sh@29 -- # for lvs in $stores
00:15:42.793 15:02:06 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d12ca35-7e03-45f4-b24a-43b4d7272ffb
00:15:42.793 15:02:06 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:15:43.052 15:02:06 -- ftl/common.sh@68 -- # lvs=8fa4a156-d367-4ea3-bc0d-73e5659e3a6e
00:15:43.052 15:02:06 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fa4a156-d367-4ea3-bc0d-73e5659e3a6e
00:15:43.311 15:02:06 -- ftl/trim.sh@43 -- # split_bdev=b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:43.311 15:02:06 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:43.311 15:02:06 -- ftl/common.sh@35 -- # local name=nvc0
00:15:43.311 15:02:06 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0
00:15:43.311 15:02:06 -- ftl/common.sh@37 -- # local base_bdev=b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:43.311 15:02:06 -- ftl/common.sh@38 -- # local cache_size=
00:15:43.311 15:02:06 -- ftl/common.sh@41 -- # get_bdev_size b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:43.311 15:02:06 -- common/autotest_common.sh@1367 -- # local bdev_name=b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:43.311 15:02:06 -- common/autotest_common.sh@1368 -- # local bdev_info
00:15:43.311 15:02:06 -- common/autotest_common.sh@1369 -- # local bs
00:15:43.311 15:02:06 -- common/autotest_common.sh@1370 -- # local nb
00:15:43.311 15:02:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b416e18f-24e7-4e58-956b-6e0d09e4cca9
00:15:44.086 15:02:07 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:15:44.086 {
00:15:44.086 "name": "b416e18f-24e7-4e58-956b-6e0d09e4cca9",
00:15:44.086 "aliases": [
00:15:44.087 "lvs/nvme0n1p0"
00:15:44.087 ],
00:15:44.087 "product_name": "Logical Volume",
00:15:44.087 "block_size": 4096,
00:15:44.087 "num_blocks": 26476544,
00:15:44.087 "uuid": "b416e18f-24e7-4e58-956b-6e0d09e4cca9", 00:15:44.087 "assigned_rate_limits": { 00:15:44.087 "rw_ios_per_sec": 0, 00:15:44.087 "rw_mbytes_per_sec": 0, 00:15:44.087 "r_mbytes_per_sec": 0, 00:15:44.087 "w_mbytes_per_sec": 0 00:15:44.087 }, 00:15:44.087 "claimed": false, 00:15:44.087 "zoned": false, 00:15:44.087 "supported_io_types": { 00:15:44.087 "read": true, 00:15:44.087 "write": true, 00:15:44.087 "unmap": true, 00:15:44.087 "write_zeroes": true, 00:15:44.087 "flush": false, 00:15:44.087 "reset": true, 00:15:44.087 "compare": false, 00:15:44.087 "compare_and_write": false, 00:15:44.087 "abort": false, 00:15:44.087 "nvme_admin": false, 00:15:44.087 "nvme_io": false 00:15:44.087 }, 00:15:44.087 "driver_specific": { 00:15:44.087 "lvol": { 00:15:44.087 "lvol_store_uuid": "8fa4a156-d367-4ea3-bc0d-73e5659e3a6e", 00:15:44.087 "base_bdev": "nvme0n1", 00:15:44.087 "thin_provision": true, 00:15:44.087 "snapshot": false, 00:15:44.087 "clone": false, 00:15:44.087 "esnap_clone": false 00:15:44.087 } 00:15:44.087 } 00:15:44.087 } 00:15:44.087 ]' 00:15:44.087 15:02:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:44.087 15:02:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:44.087 15:02:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:44.087 15:02:07 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:44.087 15:02:07 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:44.087 15:02:07 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:44.087 15:02:07 -- ftl/common.sh@48 -- # cache_size=5171 00:15:44.087 15:02:07 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:44.414 15:02:07 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:44.414 15:02:07 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:44.414 15:02:07 -- ftl/trim.sh@47 -- # get_bdev_size b416e18f-24e7-4e58-956b-6e0d09e4cca9 00:15:44.414 15:02:07 -- common/autotest_common.sh@1367 -- # local bdev_name=b416e18f-24e7-4e58-956b-6e0d09e4cca9 00:15:44.414 15:02:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:44.414 15:02:07 -- common/autotest_common.sh@1369 -- # local bs 00:15:44.414 15:02:07 -- common/autotest_common.sh@1370 -- # local nb 00:15:44.414 15:02:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b416e18f-24e7-4e58-956b-6e0d09e4cca9 00:15:44.414 15:02:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:44.414 { 00:15:44.414 "name": "b416e18f-24e7-4e58-956b-6e0d09e4cca9", 00:15:44.414 "aliases": [ 00:15:44.414 "lvs/nvme0n1p0" 00:15:44.414 ], 00:15:44.414 "product_name": "Logical Volume", 00:15:44.414 "block_size": 4096, 00:15:44.414 "num_blocks": 26476544, 00:15:44.414 "uuid": "b416e18f-24e7-4e58-956b-6e0d09e4cca9", 00:15:44.414 "assigned_rate_limits": { 00:15:44.414 "rw_ios_per_sec": 0, 00:15:44.414 "rw_mbytes_per_sec": 0, 00:15:44.414 "r_mbytes_per_sec": 0, 00:15:44.414 "w_mbytes_per_sec": 0 00:15:44.414 }, 00:15:44.414 "claimed": false, 00:15:44.414 "zoned": false, 00:15:44.414 "supported_io_types": { 00:15:44.414 "read": true, 00:15:44.414 "write": true, 00:15:44.414 "unmap": true, 00:15:44.414 "write_zeroes": true, 00:15:44.414 "flush": false, 00:15:44.414 "reset": true, 00:15:44.414 "compare": false, 00:15:44.414 "compare_and_write": false, 00:15:44.414 "abort": false, 00:15:44.414 "nvme_admin": false, 00:15:44.414 "nvme_io": false 00:15:44.414 }, 00:15:44.415 "driver_specific": { 00:15:44.415 "lvol": { 00:15:44.415 
"lvol_store_uuid": "8fa4a156-d367-4ea3-bc0d-73e5659e3a6e", 00:15:44.415 "base_bdev": "nvme0n1", 00:15:44.415 "thin_provision": true, 00:15:44.415 "snapshot": false, 00:15:44.415 "clone": false, 00:15:44.415 "esnap_clone": false 00:15:44.415 } 00:15:44.415 } 00:15:44.415 } 00:15:44.415 ]' 00:15:44.415 15:02:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:44.415 15:02:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:44.415 15:02:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:44.415 15:02:07 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:44.415 15:02:07 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:44.415 15:02:07 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:44.415 15:02:07 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:44.415 15:02:07 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b416e18f-24e7-4e58-956b-6e0d09e4cca9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:44.694 [2024-11-18 15:02:08.144650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.144708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:44.694 [2024-11-18 15:02:08.144723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:44.694 [2024-11-18 15:02:08.144741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.146843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.147017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:44.694 [2024-11-18 15:02:08.147037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:15:44.694 [2024-11-18 15:02:08.147054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.147138] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:44.694 [2024-11-18 15:02:08.147382] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:44.694 [2024-11-18 15:02:08.147397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.147404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:44.694 [2024-11-18 15:02:08.147413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:15:44.694 [2024-11-18 15:02:08.147419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.147507] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 33d3673d-fe43-4b75-9544-654cede6e0bc 00:15:44.694 [2024-11-18 15:02:08.148788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.148818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:44.694 [2024-11-18 15:02:08.148826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:44.694 [2024-11-18 15:02:08.148834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.155650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.155680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:44.694 
[2024-11-18 15:02:08.155690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.741 ms 00:15:44.694 [2024-11-18 15:02:08.155700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.155809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.155820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:44.694 [2024-11-18 15:02:08.155827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:44.694 [2024-11-18 15:02:08.155849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.155882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.155890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:44.694 [2024-11-18 15:02:08.155896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:44.694 [2024-11-18 15:02:08.155904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.155936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:44.694 [2024-11-18 15:02:08.157555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.157693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:44.694 [2024-11-18 15:02:08.157718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:15:44.694 [2024-11-18 15:02:08.157732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.157780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.157788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:44.694 [2024-11-18 15:02:08.157799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:44.694 [2024-11-18 15:02:08.157809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.157837] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:44.694 [2024-11-18 15:02:08.157931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:44.694 [2024-11-18 15:02:08.157943] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:44.694 [2024-11-18 15:02:08.157962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:44.694 [2024-11-18 15:02:08.157973] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:44.694 [2024-11-18 15:02:08.157981] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:44.694 [2024-11-18 15:02:08.157989] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:44.694 [2024-11-18 15:02:08.157996] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:44.694 [2024-11-18 15:02:08.158012] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:44.694 [2024-11-18 15:02:08.158018] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:44.694 [2024-11-18 
15:02:08.158026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.158032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:44.694 [2024-11-18 15:02:08.158040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:15:44.694 [2024-11-18 15:02:08.158046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.158109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.694 [2024-11-18 15:02:08.158116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:44.694 [2024-11-18 15:02:08.158133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:44.694 [2024-11-18 15:02:08.158138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.694 [2024-11-18 15:02:08.158238] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:44.695 [2024-11-18 15:02:08.158246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:44.695 [2024-11-18 15:02:08.158256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:44.695 [2024-11-18 15:02:08.158277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:44.695 [2024-11-18 15:02:08.158299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.695 [2024-11-18 15:02:08.158312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:44.695 [2024-11-18 15:02:08.158333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:44.695 [2024-11-18 15:02:08.158343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:44.695 [2024-11-18 15:02:08.158350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:44.695 [2024-11-18 15:02:08.158358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:44.695 [2024-11-18 15:02:08.158365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:44.695 [2024-11-18 15:02:08.158378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:44.695 [2024-11-18 15:02:08.158386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:44.695 [2024-11-18 15:02:08.158400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:44.695 [2024-11-18 15:02:08.158406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:44.695 [2024-11-18 15:02:08.158421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:44.695 [2024-11-18 15:02:08.158428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:44.695 [2024-11-18 15:02:08.158441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:44.695 [2024-11-18 15:02:08.158467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:44.695 [2024-11-18 15:02:08.158488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:44.695 [2024-11-18 15:02:08.158508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.695 [2024-11-18 15:02:08.158522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:44.695 [2024-11-18 15:02:08.158534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:44.695 [2024-11-18 15:02:08.158540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:44.695 [2024-11-18 15:02:08.158549] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:44.695 [2024-11-18 15:02:08.158555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:44.695 [2024-11-18 15:02:08.158564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:44.695 [2024-11-18 15:02:08.158582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:44.695 [2024-11-18 15:02:08.158589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:44.695 [2024-11-18 15:02:08.158596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:44.695 [2024-11-18 15:02:08.158610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:44.695 [2024-11-18 15:02:08.158617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:44.695 [2024-11-18 15:02:08.158624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:44.695 [2024-11-18 15:02:08.158632] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:44.695 [2024-11-18 15:02:08.158640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.695 [2024-11-18 15:02:08.158649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:44.695 [2024-11-18 15:02:08.158655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:44.695 [2024-11-18 15:02:08.158662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:44.695 [2024-11-18 15:02:08.158668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:44.695 [2024-11-18 15:02:08.158675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:44.695 [2024-11-18 15:02:08.158680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:44.695 [2024-11-18 15:02:08.158687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:44.695 [2024-11-18 15:02:08.158694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:44.695 [2024-11-18 15:02:08.158706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:44.695 [2024-11-18 15:02:08.158711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:44.695 [2024-11-18 15:02:08.158718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:44.695 [2024-11-18 15:02:08.158723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:44.695 [2024-11-18 15:02:08.158730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:44.695 [2024-11-18 15:02:08.158736] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:44.695 [2024-11-18 15:02:08.158746] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:44.695 [2024-11-18 15:02:08.158752] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:44.695 [2024-11-18 15:02:08.158758] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:44.695 [2024-11-18 15:02:08.158763] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:44.695 [2024-11-18 15:02:08.158770] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:44.695 [2024-11-18 15:02:08.158777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.158784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:44.695 [2024-11-18 15:02:08.158789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:15:44.695 [2024-11-18 15:02:08.158796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.165954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:44.695 [2024-11-18 15:02:08.165986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:44.695 [2024-11-18 15:02:08.165995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.072 ms 00:15:44.695 [2024-11-18 15:02:08.166003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.166102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.166113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:44.695 [2024-11-18 15:02:08.166120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:44.695 [2024-11-18 15:02:08.166129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.176677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.176823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:44.695 [2024-11-18 15:02:08.176835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:15:44.695 [2024-11-18 15:02:08.176844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.176906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.176915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:44.695 [2024-11-18 15:02:08.176922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:44.695 [2024-11-18 15:02:08.176929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.177356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.177374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:44.695 [2024-11-18 15:02:08.177383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:15:44.695 [2024-11-18 15:02:08.177392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.177497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.695 [2024-11-18 15:02:08.177517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:44.695 [2024-11-18 15:02:08.177524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:15:44.695 [2024-11-18 15:02:08.177543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.695 [2024-11-18 15:02:08.193641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.696 [2024-11-18 15:02:08.193689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:44.696 [2024-11-18 15:02:08.193703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.068 ms 00:15:44.696 [2024-11-18 15:02:08.193727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.696 [2024-11-18 15:02:08.203293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:44.696 [2024-11-18 15:02:08.220080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.696 [2024-11-18 15:02:08.220111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:44.696 [2024-11-18 15:02:08.220125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.214 ms 00:15:44.696 
[2024-11-18 15:02:08.220131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.954 [2024-11-18 15:02:08.287237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.954 [2024-11-18 15:02:08.287386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:44.954 [2024-11-18 15:02:08.287430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.013 ms 00:15:44.954 [2024-11-18 15:02:08.287455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.954 [2024-11-18 15:02:08.287587] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:44.954 [2024-11-18 15:02:08.287623] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:47.485 [2024-11-18 15:02:10.645616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.645717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:47.485 [2024-11-18 15:02:10.645745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2358.011 ms 00:15:47.485 [2024-11-18 15:02:10.645759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.646142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.646163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:47.485 [2024-11-18 15:02:10.646182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:15:47.485 [2024-11-18 15:02:10.646195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.649825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.649873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:47.485 [2024-11-18 15:02:10.649890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.560 ms 00:15:47.485 [2024-11-18 15:02:10.649899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.652374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.652584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:47.485 [2024-11-18 15:02:10.652606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.420 ms 00:15:47.485 [2024-11-18 15:02:10.652615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.653035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.653072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:47.485 [2024-11-18 15:02:10.653086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:47.485 [2024-11-18 15:02:10.653095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.675402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.675568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:47.485 [2024-11-18 15:02:10.675595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.268 ms 00:15:47.485 [2024-11-18 15:02:10.675603] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.680690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.680736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:47.485 [2024-11-18 15:02:10.680765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:15:47.485 [2024-11-18 15:02:10.680773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.685156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.685190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:47.485 [2024-11-18 15:02:10.685205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:15:47.485 [2024-11-18 15:02:10.685213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.688901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.688933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.485 [2024-11-18 15:02:10.688945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:15:47.485 [2024-11-18 15:02:10.688952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.689005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.689014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.485 [2024-11-18 15:02:10.689025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:47.485 [2024-11-18 15:02:10.689035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.689201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.485 [2024-11-18 15:02:10.689211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.485 [2024-11-18 15:02:10.689223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:47.485 [2024-11-18 15:02:10.689230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.485 [2024-11-18 15:02:10.690455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:47.485 [2024-11-18 15:02:10.691653] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2545.467 ms, result 0 00:15:47.485 [2024-11-18 15:02:10.692388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:47.485 { 00:15:47.485 "name": "ftl0", 00:15:47.485 "uuid": "33d3673d-fe43-4b75-9544-654cede6e0bc" 00:15:47.485 } 00:15:47.485 15:02:10 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:47.485 15:02:10 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:47.485 15:02:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:47.485 15:02:10 -- common/autotest_common.sh@899 -- # local i 00:15:47.485 15:02:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:47.485 15:02:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:47.485 15:02:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:47.485 15:02:10 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:47.743 [ 00:15:47.743 { 00:15:47.743 "name": "ftl0", 00:15:47.743 "aliases": [ 00:15:47.743 "33d3673d-fe43-4b75-9544-654cede6e0bc" 00:15:47.743 ], 00:15:47.743 "product_name": "FTL disk", 00:15:47.743 "block_size": 4096, 00:15:47.743 "num_blocks": 23592960, 00:15:47.743 "uuid": "33d3673d-fe43-4b75-9544-654cede6e0bc", 00:15:47.743 "assigned_rate_limits": { 00:15:47.743 "rw_ios_per_sec": 0, 00:15:47.743 "rw_mbytes_per_sec": 0, 00:15:47.743 "r_mbytes_per_sec": 0, 00:15:47.743 "w_mbytes_per_sec": 0 00:15:47.743 }, 00:15:47.743 "claimed": false, 00:15:47.743 "zoned": false, 00:15:47.743 "supported_io_types": { 00:15:47.743 "read": true, 00:15:47.743 "write": true, 00:15:47.743 "unmap": true, 00:15:47.743 "write_zeroes": true, 00:15:47.743 "flush": true, 00:15:47.743 "reset": false, 00:15:47.743 "compare": false, 00:15:47.743 "compare_and_write": false, 00:15:47.744 "abort": false, 00:15:47.744 "nvme_admin": false, 00:15:47.744 "nvme_io": false 00:15:47.744 }, 00:15:47.744 "driver_specific": { 00:15:47.744 "ftl": { 00:15:47.744 "base_bdev": "b416e18f-24e7-4e58-956b-6e0d09e4cca9", 00:15:47.744 "cache": "nvc0n1p0" 00:15:47.744 } 00:15:47.744 } 00:15:47.744 } 00:15:47.744 ] 00:15:47.744 15:02:11 -- common/autotest_common.sh@905 -- # return 0 00:15:47.744 15:02:11 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:47.744 15:02:11 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:47.744 15:02:11 -- ftl/trim.sh@56 -- # echo ']}' 00:15:47.744 15:02:11 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:48.001 15:02:11 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:48.001 { 00:15:48.001 "name": "ftl0", 00:15:48.001 "aliases": [ 00:15:48.001 "33d3673d-fe43-4b75-9544-654cede6e0bc" 00:15:48.001 ], 00:15:48.001 "product_name": "FTL disk", 00:15:48.001 "block_size": 4096, 00:15:48.001 "num_blocks": 23592960, 00:15:48.001 "uuid": "33d3673d-fe43-4b75-9544-654cede6e0bc", 00:15:48.001 "assigned_rate_limits": { 00:15:48.001 "rw_ios_per_sec": 0, 00:15:48.001 "rw_mbytes_per_sec": 0, 00:15:48.001 "r_mbytes_per_sec": 0, 00:15:48.001 "w_mbytes_per_sec": 0 00:15:48.001 }, 00:15:48.001 "claimed": false, 00:15:48.001 "zoned": false, 00:15:48.001 "supported_io_types": { 00:15:48.001 "read": true, 00:15:48.001 "write": true, 00:15:48.001 "unmap": true, 00:15:48.001 "write_zeroes": true, 00:15:48.001 "flush": true, 00:15:48.001 "reset": false, 00:15:48.001 "compare": false, 00:15:48.001 "compare_and_write": false, 00:15:48.001 "abort": false, 00:15:48.001 "nvme_admin": false, 00:15:48.001 "nvme_io": false 00:15:48.001 }, 00:15:48.001 "driver_specific": { 00:15:48.001 "ftl": { 00:15:48.001 "base_bdev": "b416e18f-24e7-4e58-956b-6e0d09e4cca9", 00:15:48.001 "cache": "nvc0n1p0" 00:15:48.001 } 00:15:48.001 } 00:15:48.001 } 00:15:48.001 ]' 00:15:48.001 15:02:11 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:48.001 15:02:11 -- ftl/trim.sh@60 -- # nb=23592960 00:15:48.001 15:02:11 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:48.264 [2024-11-18 15:02:11.680967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.681036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:48.264 [2024-11-18 15:02:11.681050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.264 [2024-11-18 15:02:11.681060] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.681100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:48.264 [2024-11-18 15:02:11.681700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.681781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:48.264 [2024-11-18 15:02:11.681810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:15:48.264 [2024-11-18 15:02:11.681818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.682441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.682454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:48.264 [2024-11-18 15:02:11.682469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:15:48.264 [2024-11-18 15:02:11.682480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.686155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.686178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:48.264 [2024-11-18 15:02:11.686190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:15:48.264 [2024-11-18 15:02:11.686197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.693147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.693305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:48.264 [2024-11-18 15:02:11.693340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:15:48.264 [2024-11-18 15:02:11.693348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.695468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.695514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:48.264 [2024-11-18 15:02:11.695528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:15:48.264 [2024-11-18 15:02:11.695536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.700374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.700510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:48.264 [2024-11-18 15:02:11.700536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:15:48.264 [2024-11-18 15:02:11.700545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.700747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.700758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:48.264 [2024-11-18 15:02:11.700768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:15:48.264 [2024-11-18 15:02:11.700778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.702962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.702994] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:48.264 [2024-11-18 15:02:11.703005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:15:48.264 [2024-11-18 15:02:11.703012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.704614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.704644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:48.264 [2024-11-18 15:02:11.704656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:15:48.264 [2024-11-18 15:02:11.704664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.705712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.705742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:48.264 [2024-11-18 15:02:11.705754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:15:48.264 [2024-11-18 15:02:11.705761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.707006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.264 [2024-11-18 15:02:11.707035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:48.264 [2024-11-18 15:02:11.707046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:15:48.264 [2024-11-18 15:02:11.707052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.264 [2024-11-18 15:02:11.707094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:48.264 [2024-11-18 15:02:11.707110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:48.264 [2024-11-18 15:02:11.707447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707491] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707706] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 
15:02:11.707919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.707994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.708004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.708011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.708021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:48.265 [2024-11-18 15:02:11.708042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:48.265 [2024-11-18 15:02:11.708059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:15:48.265 [2024-11-18 15:02:11.708068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:48.265 [2024-11-18 15:02:11.708077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:48.265 [2024-11-18 15:02:11.708084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:48.265 [2024-11-18 15:02:11.708108] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:48.265 [2024-11-18 15:02:11.708125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:48.265 [2024-11-18 15:02:11.708135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:48.265 [2024-11-18 15:02:11.708145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:48.265 [2024-11-18 15:02:11.708154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:48.265 [2024-11-18 15:02:11.708160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:48.266 [2024-11-18 15:02:11.708168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.266 [2024-11-18 15:02:11.708175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:48.266 [2024-11-18 15:02:11.708186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:15:48.266 [2024-11-18 15:02:11.708193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:48.266 [2024-11-18 15:02:11.710141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.266 [2024-11-18 15:02:11.710174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:48.266 [2024-11-18 15:02:11.710186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:15:48.266 [2024-11-18 15:02:11.710198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.710295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.266 [2024-11-18 15:02:11.710305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:48.266 [2024-11-18 15:02:11.710331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:48.266 [2024-11-18 15:02:11.710341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.717080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.717200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.266 [2024-11-18 15:02:11.717270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.717381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.717507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.717534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.266 [2024-11-18 15:02:11.717628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.717655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.717776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.717810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.266 [2024-11-18 15:02:11.717834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.717891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.717942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.717969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.266 [2024-11-18 15:02:11.717995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.718052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.730755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.730911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.266 [2024-11-18 15:02:11.730987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.731012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.735969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.736086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.266 [2024-11-18 15:02:11.736178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 
15:02:11.736202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.736340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.736374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.266 [2024-11-18 15:02:11.736430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.736455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.736529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.736602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.266 [2024-11-18 15:02:11.736632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.736653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.736760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.736792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.266 [2024-11-18 15:02:11.736826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.736835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.736905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.736918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:48.266 [2024-11-18 15:02:11.736928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.736936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.736999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.737030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.266 [2024-11-18 15:02:11.737041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.737057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.737127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:48.266 [2024-11-18 15:02:11.737139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.266 [2024-11-18 15:02:11.737152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:48.266 [2024-11-18 15:02:11.737159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.266 [2024-11-18 15:02:11.737384] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.384 ms, result 0 00:15:48.266 true 00:15:48.266 15:02:11 -- ftl/trim.sh@63 -- # killprocess 83188 00:15:48.266 15:02:11 -- common/autotest_common.sh@936 -- # '[' -z 83188 ']' 00:15:48.266 15:02:11 -- common/autotest_common.sh@940 -- # kill -0 83188 00:15:48.266 15:02:11 -- common/autotest_common.sh@941 -- # uname 00:15:48.266 15:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.266 15:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83188 00:15:48.266 killing process with pid 83188 00:15:48.266 15:02:11 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:15:48.266 15:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.266 15:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83188' 00:15:48.266 15:02:11 -- common/autotest_common.sh@955 -- # kill 83188 00:15:48.266 15:02:11 -- common/autotest_common.sh@960 -- # wait 83188 00:15:53.532 15:02:16 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:53.790 65536+0 records in 00:15:53.790 65536+0 records out 00:15:53.790 268435456 bytes (268 MB, 256 MiB) copied, 1.06722 s, 252 MB/s 00:15:53.790 15:02:17 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:53.790 [2024-11-18 15:02:17.374583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.790 [2024-11-18 15:02:17.374719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83392 ] 00:15:54.048 [2024-11-18 15:02:17.521202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.048 [2024-11-18 15:02:17.549941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.048 [2024-11-18 15:02:17.630905] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:54.048 [2024-11-18 15:02:17.630970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:54.308 [2024-11-18 15:02:17.773497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.773552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:54.308 [2024-11-18 15:02:17.773567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:54.308 [2024-11-18 15:02:17.773574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.775296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.775477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:54.308 [2024-11-18 15:02:17.775491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:15:54.308 [2024-11-18 15:02:17.775498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.775564] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:54.308 [2024-11-18 15:02:17.775754] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:54.308 [2024-11-18 15:02:17.775765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.775774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:54.308 [2024-11-18 15:02:17.775782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:15:54.308 [2024-11-18 15:02:17.775788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.776768] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:54.308 [2024-11-18 15:02:17.778840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 
[2024-11-18 15:02:17.778965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:54.308 [2024-11-18 15:02:17.778977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:15:54.308 [2024-11-18 15:02:17.778983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.779029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.779037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:54.308 [2024-11-18 15:02:17.779043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:54.308 [2024-11-18 15:02:17.779051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.783423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.783454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:54.308 [2024-11-18 15:02:17.783462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:15:54.308 [2024-11-18 15:02:17.783467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.783536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.783543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:54.308 [2024-11-18 15:02:17.783554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:54.308 [2024-11-18 15:02:17.783559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.783576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.783585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:54.308 [2024-11-18 15:02:17.783591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:54.308 [2024-11-18 15:02:17.783596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.783621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:54.308 [2024-11-18 15:02:17.784815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.784840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:54.308 [2024-11-18 15:02:17.784847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:15:54.308 [2024-11-18 15:02:17.784853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.784885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.308 [2024-11-18 15:02:17.784896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:54.308 [2024-11-18 15:02:17.784905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:54.308 [2024-11-18 15:02:17.784911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.308 [2024-11-18 15:02:17.784925] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:54.308 [2024-11-18 15:02:17.784939] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:54.308 [2024-11-18 15:02:17.784967] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:54.308 [2024-11-18 15:02:17.784981] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:54.308 [2024-11-18 15:02:17.785037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:54.308 [2024-11-18 15:02:17.785046] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:54.308 [2024-11-18 15:02:17.785054] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:54.308 [2024-11-18 15:02:17.785062] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:54.308 [2024-11-18 15:02:17.785069] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785075] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:54.309 [2024-11-18 15:02:17.785080] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:54.309 [2024-11-18 15:02:17.785090] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:54.309 [2024-11-18 15:02:17.785098] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:54.309 [2024-11-18 15:02:17.785107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.309 [2024-11-18 15:02:17.785113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:54.309 [2024-11-18 15:02:17.785119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:15:54.309 [2024-11-18 15:02:17.785125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.309 [2024-11-18 15:02:17.785174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.309 [2024-11-18 15:02:17.785181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:54.309 [2024-11-18 15:02:17.785187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:54.309 [2024-11-18 15:02:17.785192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.309 [2024-11-18 15:02:17.785253] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:54.309 [2024-11-18 15:02:17.785260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:54.309 [2024-11-18 15:02:17.785266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:54.309 [2024-11-18 15:02:17.785283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:54.309 [2024-11-18 15:02:17.785299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:54.309 [2024-11-18 15:02:17.785310] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:54.309 [2024-11-18 15:02:17.785337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:54.309 [2024-11-18 15:02:17.785342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:54.309 [2024-11-18 15:02:17.785347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:54.309 [2024-11-18 15:02:17.785352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:54.309 [2024-11-18 15:02:17.785357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:54.309 [2024-11-18 15:02:17.785370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:54.309 [2024-11-18 15:02:17.785374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:54.309 [2024-11-18 15:02:17.785385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:54.309 [2024-11-18 15:02:17.785390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:54.309 [2024-11-18 15:02:17.785400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:54.309 [2024-11-18 15:02:17.785415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:54.309 [2024-11-18 15:02:17.785430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:54.309 [2024-11-18 15:02:17.785453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:54.309 [2024-11-18 15:02:17.785469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:54.309 [2024-11-18 15:02:17.785480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:54.309 [2024-11-18 15:02:17.785486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:54.309 [2024-11-18 15:02:17.785492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:54.309 [2024-11-18 15:02:17.785497] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:54.309 [2024-11-18 15:02:17.785503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:54.309 
[2024-11-18 15:02:17.785510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:54.309 [2024-11-18 15:02:17.785522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:54.309 [2024-11-18 15:02:17.785529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:54.309 [2024-11-18 15:02:17.785535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:54.309 [2024-11-18 15:02:17.785541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:54.309 [2024-11-18 15:02:17.785548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:54.309 [2024-11-18 15:02:17.785554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:54.309 [2024-11-18 15:02:17.785560] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:54.309 [2024-11-18 15:02:17.785567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:54.309 [2024-11-18 15:02:17.785575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:54.309 [2024-11-18 15:02:17.785580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:54.309 [2024-11-18 15:02:17.785586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:54.309 [2024-11-18 15:02:17.785591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:54.309 [2024-11-18 15:02:17.785596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:54.309 [2024-11-18 15:02:17.785601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:54.309 [2024-11-18 15:02:17.785606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:54.309 [2024-11-18 15:02:17.785611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:54.309 [2024-11-18 15:02:17.785616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:54.309 [2024-11-18 15:02:17.785622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:54.309 [2024-11-18 15:02:17.785627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:54.309 [2024-11-18 15:02:17.785632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:54.309 [2024-11-18 15:02:17.785638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:54.309 [2024-11-18 15:02:17.785643] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:54.309 [2024-11-18 15:02:17.785649] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:54.309 [2024-11-18 15:02:17.785654] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:54.309 [2024-11-18 15:02:17.785660] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:54.309 [2024-11-18 15:02:17.785665] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:54.309 [2024-11-18 15:02:17.785670] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:54.309 [2024-11-18 15:02:17.785675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.309 [2024-11-18 15:02:17.785681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:54.309 [2024-11-18 15:02:17.785686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:15:54.309 [2024-11-18 15:02:17.785692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.309 [2024-11-18 15:02:17.791051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.309 [2024-11-18 15:02:17.791080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.309 [2024-11-18 15:02:17.791088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms 00:15:54.309 [2024-11-18 15:02:17.791098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.309 [2024-11-18 15:02:17.791193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.309 [2024-11-18 15:02:17.791204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:54.309 [2024-11-18 15:02:17.791212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:54.309 [2024-11-18 15:02:17.791218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.309 [2024-11-18 15:02:17.810778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.811035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.310 [2024-11-18 15:02:17.811068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.538 ms 00:15:54.310 [2024-11-18 15:02:17.811085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.811195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.811212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.310 [2024-11-18 15:02:17.811224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:54.310 [2024-11-18 15:02:17.811236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.811644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.811676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.310 [2024-11-18 15:02:17.811691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.375 ms 00:15:54.310 [2024-11-18 15:02:17.811707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.811887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.811901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.310 [2024-11-18 15:02:17.811913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:15:54.310 [2024-11-18 15:02:17.811929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.817541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.817661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.310 [2024-11-18 15:02:17.817673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.577 ms 00:15:54.310 [2024-11-18 15:02:17.817680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.819888] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:54.310 [2024-11-18 15:02:17.819919] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:54.310 [2024-11-18 15:02:17.819927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.819934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:54.310 [2024-11-18 15:02:17.819940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:15:54.310 [2024-11-18 15:02:17.819951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.831007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.831038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:54.310 [2024-11-18 15:02:17.831046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.019 ms 00:15:54.310 [2024-11-18 15:02:17.831053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.832658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.832765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:54.310 [2024-11-18 15:02:17.832777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:15:54.310 [2024-11-18 15:02:17.832783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.834005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.834031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:54.310 [2024-11-18 15:02:17.834038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:15:54.310 [2024-11-18 15:02:17.834043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.834211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.834219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:54.310 [2024-11-18 15:02:17.834226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:15:54.310 [2024-11-18 15:02:17.834232] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.849660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.849709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:54.310 [2024-11-18 15:02:17.849719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.411 ms 00:15:54.310 [2024-11-18 15:02:17.849725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.855559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:54.310 [2024-11-18 15:02:17.867898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.867935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:54.310 [2024-11-18 15:02:17.867946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.087 ms 00:15:54.310 [2024-11-18 15:02:17.867952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.868026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.868039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:54.310 [2024-11-18 15:02:17.868048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:54.310 [2024-11-18 15:02:17.868054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.868090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.868098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:54.310 [2024-11-18 15:02:17.868104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:54.310 [2024-11-18 15:02:17.868109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.869064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.869095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:54.310 [2024-11-18 15:02:17.869102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:15:54.310 [2024-11-18 15:02:17.869115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.869141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.869147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:54.310 [2024-11-18 15:02:17.869153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:54.310 [2024-11-18 15:02:17.869161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.869190] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:54.310 [2024-11-18 15:02:17.869197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.869202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:54.310 [2024-11-18 15:02:17.869208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:54.310 [2024-11-18 15:02:17.869213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.872471] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.872501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:54.310 [2024-11-18 15:02:17.872509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:15:54.310 [2024-11-18 15:02:17.872515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.872573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.310 [2024-11-18 15:02:17.872581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:54.310 [2024-11-18 15:02:17.872588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:54.310 [2024-11-18 15:02:17.872594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.310 [2024-11-18 15:02:17.873214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:54.310 [2024-11-18 15:02:17.874026] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.505 ms, result 0 00:15:54.310 [2024-11-18 15:02:17.874763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:54.310 [2024-11-18 15:02:17.884649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:55.685  [2024-11-18T15:02:20.209Z] Copying: 42/256 [MB] (42 MBps) [2024-11-18T15:02:21.144Z] Copying: 83/256 [MB] (41 MBps) [2024-11-18T15:02:22.079Z] Copying: 125/256 [MB] (41 MBps) [2024-11-18T15:02:23.012Z] Copying: 172/256 [MB] (47 MBps) [2024-11-18T15:02:23.948Z] Copying: 221/256 [MB] (48 MBps) [2024-11-18T15:02:23.948Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-18 15:02:23.665603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:00.358 [2024-11-18 15:02:23.667055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.667096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:00.358 [2024-11-18 15:02:23.667111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:00.358 [2024-11-18 15:02:23.667126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.667148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:00.358 [2024-11-18 15:02:23.667706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.667731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:00.358 [2024-11-18 15:02:23.667750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:16:00.358 [2024-11-18 15:02:23.667761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.669284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.669424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:00.358 [2024-11-18 15:02:23.669441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:16:00.358 [2024-11-18 15:02:23.669451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.676302] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.676410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:00.358 [2024-11-18 15:02:23.676482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.832 ms 00:16:00.358 [2024-11-18 15:02:23.676548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.683435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.683539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:00.358 [2024-11-18 15:02:23.683603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.815 ms 00:16:00.358 [2024-11-18 15:02:23.683626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.685270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.685392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:00.358 [2024-11-18 15:02:23.685451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:16:00.358 [2024-11-18 15:02:23.685472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.689268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.689384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:00.358 [2024-11-18 15:02:23.689446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:16:00.358 [2024-11-18 15:02:23.689496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.689645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.689706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:00.358 [2024-11-18 15:02:23.689760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:00.358 [2024-11-18 15:02:23.689781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.691547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.691643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:00.358 [2024-11-18 15:02:23.691693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:16:00.358 [2024-11-18 15:02:23.691713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.693189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.693282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:00.358 [2024-11-18 15:02:23.693341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:16:00.358 [2024-11-18 15:02:23.693363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.694279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.694387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:00.358 [2024-11-18 15:02:23.694435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:16:00.358 [2024-11-18 15:02:23.694456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:00.358 [2024-11-18 15:02:23.695596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.358 [2024-11-18 15:02:23.695688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:00.358 [2024-11-18 15:02:23.695736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:16:00.358 [2024-11-18 15:02:23.695757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.358 [2024-11-18 15:02:23.695804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:00.358 [2024-11-18 15:02:23.695843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.695903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.695933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.695985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:00.358 [2024-11-18 15:02:23.696666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.696693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:16:00.359 [2024-11-18 15:02:23.696753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.696814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.696865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.696896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.696924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.697990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.698969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699613] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:00.359 [2024-11-18 15:02:23.699659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:00.359 [2024-11-18 15:02:23.699668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:16:00.359 [2024-11-18 15:02:23.699675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:00.359 [2024-11-18 15:02:23.699682] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:00.359 [2024-11-18 15:02:23.699690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:00.359 [2024-11-18 15:02:23.699697] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:00.359 [2024-11-18 15:02:23.699704] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:00.359 [2024-11-18 15:02:23.699719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:00.359 [2024-11-18 15:02:23.699727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:00.359 [2024-11-18 15:02:23.699733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:00.359 [2024-11-18 15:02:23.699739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:00.360 [2024-11-18 15:02:23.699747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.360 [2024-11-18 15:02:23.699755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:00.360 [2024-11-18 15:02:23.699767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:16:00.360 [2024-11-18 15:02:23.699777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.701565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.360 [2024-11-18 15:02:23.701600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:00.360 [2024-11-18 15:02:23.701609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:16:00.360 [2024-11-18 15:02:23.701616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.701679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.360 [2024-11-18 15:02:23.701691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:00.360 [2024-11-18 15:02:23.701702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:00.360 [2024-11-18 15:02:23.701709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.707931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.707962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:00.360 [2024-11-18 15:02:23.707971] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.707979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.708055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.708070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:00.360 [2024-11-18 15:02:23.708081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.708089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.708127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.708136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:00.360 [2024-11-18 15:02:23.708144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.708152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.708171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.708178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:00.360 [2024-11-18 15:02:23.708189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.708196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.719271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.719310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:00.360 [2024-11-18 15:02:23.719341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.719349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.723979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.724012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:00.360 [2024-11-18 15:02:23.724028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.724036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.724074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.724084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:00.360 [2024-11-18 15:02:23.724092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.724100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.724131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.724139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:00.360 [2024-11-18 15:02:23.724147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.360 [2024-11-18 15:02:23.724155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.360 [2024-11-18 15:02:23.724224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.360 [2024-11-18 15:02:23.724234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools
00:16:00.360 [2024-11-18 15:02:23.724242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:00.360 [2024-11-18 15:02:23.724250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:00.360 [2024-11-18 15:02:23.724279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:00.360 [2024-11-18 15:02:23.724288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:00.360 [2024-11-18 15:02:23.724296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:00.360 [2024-11-18 15:02:23.724303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:00.360 [2024-11-18 15:02:23.724358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:00.360 [2024-11-18 15:02:23.724368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:00.360 [2024-11-18 15:02:23.724381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:00.360 [2024-11-18 15:02:23.724389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:00.360 [2024-11-18 15:02:23.724436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:00.360 [2024-11-18 15:02:23.724447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:00.360 [2024-11-18 15:02:23.724454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:00.360 [2024-11-18 15:02:23.724462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:00.360 [2024-11-18 15:02:23.724612] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.549 ms, result 0
00:16:00.928
00:16:00.928
00:16:00.928 15:02:24 -- ftl/trim.sh@72 -- # svcpid=83467
00:16:00.928 15:02:24 -- ftl/trim.sh@73 -- # waitforlisten 83467
00:16:00.928 15:02:24 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:16:00.928 15:02:24 -- common/autotest_common.sh@829 -- # '[' -z 83467 ']'
00:16:00.928 15:02:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:00.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:00.928 15:02:24 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:00.928 15:02:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:00.928 15:02:24 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:00.928 15:02:24 -- common/autotest_common.sh@10 -- # set +x
00:16:00.928 [2024-11-18 15:02:24.322387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
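The xtrace block above boils down to three steps: launch spdk_tgt in the background with the ftl_init log component enabled, record its pid, and wait until the target answers on its RPC socket (the trace shows waitforlisten using /var/tmp/spdk.sock and max_retries=100). A minimal stand-alone sketch of the same flow, assuming the paths seen in this log; the rpc_get_methods polling loop is a simplified stand-in for the waitforlisten helper, whose body is not shown here:

  # Launch the target in the background with FTL init logging, as trim.sh@71 does.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # Poll the UNIX-domain RPC socket until the target responds (assumed
  # simplification of waitforlisten, which the trace shows allowing 100 retries).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done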
00:16:00.928 [2024-11-18 15:02:24.322490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83467 ] 00:16:00.928 [2024-11-18 15:02:24.463109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.928 [2024-11-18 15:02:24.504293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.928 [2024-11-18 15:02:24.504726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.863 15:02:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.863 15:02:25 -- common/autotest_common.sh@862 -- # return 0 00:16:01.863 15:02:25 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:01.863 [2024-11-18 15:02:25.272263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:01.863 [2024-11-18 15:02:25.272347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:01.863 [2024-11-18 15:02:25.435516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.435567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.863 [2024-11-18 15:02:25.435582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:01.863 [2024-11-18 15:02:25.435590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.437842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.437880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.863 [2024-11-18 15:02:25.437895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms 00:16:01.863 [2024-11-18 15:02:25.437903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.438028] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.863 [2024-11-18 15:02:25.438275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.863 [2024-11-18 15:02:25.438300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.438312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.863 [2024-11-18 15:02:25.438335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:16:01.863 [2024-11-18 15:02:25.438343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.439764] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:01.863 [2024-11-18 15:02:25.442329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.442363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:01.863 [2024-11-18 15:02:25.442392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:16:01.863 [2024-11-18 15:02:25.442404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.442460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.442474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:01.863 [2024-11-18 15:02:25.442485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:01.863 [2024-11-18 15:02:25.442494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.448845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.448877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.863 [2024-11-18 15:02:25.448889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.300 ms 00:16:01.863 [2024-11-18 15:02:25.448898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.448996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.449011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.863 [2024-11-18 15:02:25.449020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:01.863 [2024-11-18 15:02:25.449030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.449057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.863 [2024-11-18 15:02:25.449067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.863 [2024-11-18 15:02:25.449075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:01.863 [2024-11-18 15:02:25.449084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.863 [2024-11-18 15:02:25.449109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:02.122 [2024-11-18 15:02:25.450767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.122 [2024-11-18 15:02:25.450791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.122 [2024-11-18 15:02:25.450802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:16:02.122 [2024-11-18 15:02:25.450809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.122 [2024-11-18 15:02:25.450854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.122 [2024-11-18 15:02:25.450863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:02.122 [2024-11-18 15:02:25.450873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:02.122 [2024-11-18 15:02:25.450880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.122 [2024-11-18 15:02:25.450903] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:02.122 [2024-11-18 15:02:25.450922] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:02.122 [2024-11-18 15:02:25.450963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:02.122 [2024-11-18 15:02:25.450979] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:02.122 [2024-11-18 15:02:25.451055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:02.122 [2024-11-18 15:02:25.451067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:02.122 [2024-11-18 15:02:25.451081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:02.122 [2024-11-18 15:02:25.451092] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:02.122 [2024-11-18 15:02:25.451109] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:02.122 [2024-11-18 15:02:25.451118] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:02.122 [2024-11-18 15:02:25.451129] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:02.122 [2024-11-18 15:02:25.451137] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:02.122 [2024-11-18 15:02:25.451146] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:02.122 [2024-11-18 15:02:25.451157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.122 [2024-11-18 15:02:25.451165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:02.122 [2024-11-18 15:02:25.451174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:16:02.122 [2024-11-18 15:02:25.451183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.122 [2024-11-18 15:02:25.451252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.122 [2024-11-18 15:02:25.451262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:02.122 [2024-11-18 15:02:25.451269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:02.122 [2024-11-18 15:02:25.451280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.122 [2024-11-18 15:02:25.451375] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:02.122 [2024-11-18 15:02:25.451389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:02.122 [2024-11-18 15:02:25.451398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:02.122 [2024-11-18 15:02:25.451409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:02.122 [2024-11-18 15:02:25.451418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:02.122 [2024-11-18 15:02:25.451428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:02.122 [2024-11-18 15:02:25.451436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:02.123 [2024-11-18 15:02:25.451454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:02.123 [2024-11-18 15:02:25.451472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:02.123 [2024-11-18 15:02:25.451481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:02.123 [2024-11-18 15:02:25.451489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:02.123 [2024-11-18 15:02:25.451497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:02.123 [2024-11-18 15:02:25.451506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:02.123 [2024-11-18 15:02:25.451515] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:02.123 [2024-11-18 15:02:25.451532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:02.123 [2024-11-18 15:02:25.451539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:02.123 [2024-11-18 15:02:25.451557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:02.123 [2024-11-18 15:02:25.451568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:02.123 [2024-11-18 15:02:25.451590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:02.123 [2024-11-18 15:02:25.451617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:02.123 [2024-11-18 15:02:25.451644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:02.123 [2024-11-18 15:02:25.451668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:02.123 [2024-11-18 15:02:25.451694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:02.123 [2024-11-18 15:02:25.451710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:02.123 [2024-11-18 15:02:25.451718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:02.123 [2024-11-18 15:02:25.451727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:02.123 [2024-11-18 15:02:25.451734] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:02.123 [2024-11-18 15:02:25.451745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:02.123 [2024-11-18 15:02:25.451753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:02.123 [2024-11-18 15:02:25.451775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:02.123 [2024-11-18 15:02:25.451785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:02.123 [2024-11-18 15:02:25.451793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:02.123 [2024-11-18 15:02:25.451801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:02.123 [2024-11-18 15:02:25.451807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:02.123 [2024-11-18 15:02:25.451815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:02.123 [2024-11-18 15:02:25.451823] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:02.123 [2024-11-18 15:02:25.451840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:02.123 [2024-11-18 15:02:25.451850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:02.123 [2024-11-18 15:02:25.451860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:02.123 [2024-11-18 15:02:25.451868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:02.123 [2024-11-18 15:02:25.451877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:02.123 [2024-11-18 15:02:25.451884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:02.123 [2024-11-18 15:02:25.451893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:02.123 [2024-11-18 15:02:25.451901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:02.123 [2024-11-18 15:02:25.451911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:02.123 [2024-11-18 15:02:25.451918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:02.123 [2024-11-18 15:02:25.451928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:02.123 [2024-11-18 15:02:25.451935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:02.123 [2024-11-18 15:02:25.451945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:02.123 [2024-11-18 15:02:25.451952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:02.123 [2024-11-18 15:02:25.451961] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:02.123 [2024-11-18 15:02:25.451969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:02.123 [2024-11-18 15:02:25.451980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:02.123 [2024-11-18 15:02:25.451987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:02.123 [2024-11-18 15:02:25.451996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:02.123 [2024-11-18 15:02:25.452004] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:02.123 [2024-11-18 15:02:25.452013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.452021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:02.123 [2024-11-18 15:02:25.452031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:16:02.123 [2024-11-18 15:02:25.452038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.460128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.460260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:02.123 [2024-11-18 15:02:25.460401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.040 ms 00:16:02.123 [2024-11-18 15:02:25.460471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.460606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.460640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:02.123 [2024-11-18 15:02:25.460714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:02.123 [2024-11-18 15:02:25.460742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.472171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.472284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.123 [2024-11-18 15:02:25.472361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.380 ms 00:16:02.123 [2024-11-18 15:02:25.472671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.472837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.472878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:02.123 [2024-11-18 15:02:25.472986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:02.123 [2024-11-18 15:02:25.473011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.473457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.473548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.123 [2024-11-18 15:02:25.473606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:16:02.123 [2024-11-18 15:02:25.473629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.473766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.473793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.123 [2024-11-18 15:02:25.473844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:02.123 [2024-11-18 15:02:25.473867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.480668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.123 [2024-11-18 15:02:25.480768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.123 [2024-11-18 15:02:25.480822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:16:02.123 [2024-11-18 15:02:25.480847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.123 [2024-11-18 15:02:25.483573] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:02.123 [2024-11-18 15:02:25.483689] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:02.123 [2024-11-18 15:02:25.483753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.483806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:02.124 [2024-11-18 15:02:25.483832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:16:02.124 [2024-11-18 15:02:25.483870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.498535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.498642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:02.124 [2024-11-18 15:02:25.498701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.596 ms 00:16:02.124 [2024-11-18 15:02:25.498778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.500625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.500721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:02.124 [2024-11-18 15:02:25.500778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:16:02.124 [2024-11-18 15:02:25.500800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.502143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.502239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:02.124 [2024-11-18 15:02:25.502292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:16:02.124 [2024-11-18 15:02:25.502325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.502549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.502623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:02.124 [2024-11-18 15:02:25.502690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:02.124 [2024-11-18 15:02:25.502713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.522653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.522783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:02.124 [2024-11-18 15:02:25.522840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.901 ms 00:16:02.124 [2024-11-18 15:02:25.522854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.530336] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:02.124 [2024-11-18 15:02:25.546672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.546716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:02.124 [2024-11-18 15:02:25.546728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.705 ms 00:16:02.124 [2024-11-18 15:02:25.546743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.546820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.546832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:02.124 [2024-11-18 15:02:25.546841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:02.124 [2024-11-18 15:02:25.546850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.546904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.546919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:02.124 [2024-11-18 15:02:25.546930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:02.124 [2024-11-18 15:02:25.546939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.548188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.548222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:02.124 [2024-11-18 15:02:25.548232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:16:02.124 [2024-11-18 15:02:25.548241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.548272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.548285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:02.124 [2024-11-18 15:02:25.548293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:02.124 [2024-11-18 15:02:25.548302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.548355] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:02.124 [2024-11-18 15:02:25.548367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.548375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:02.124 [2024-11-18 15:02:25.548387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:02.124 [2024-11-18 15:02:25.548395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.552440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.552476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:02.124 [2024-11-18 15:02:25.552487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:16:02.124 [2024-11-18 15:02:25.552495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.124 [2024-11-18 15:02:25.552571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.124 [2024-11-18 15:02:25.552585] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:02.124 [2024-11-18 15:02:25.552598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:16:02.124 [2024-11-18 15:02:25.552606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.124 [2024-11-18 15:02:25.553546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:02.124 [2024-11-18 15:02:25.554692] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.717 ms, result 0
00:16:02.124 [2024-11-18 15:02:25.555600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:02.124 Some configs were skipped because the RPC state that can call them passed over.
00:16:02.124 15:02:25 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
[2024-11-18 15:02:25.768714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.383 [2024-11-18 15:02:25.768916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:02.383 [2024-11-18 15:02:25.768974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms
00:16:02.383 [2024-11-18 15:02:25.769001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.383 [2024-11-18 15:02:25.769143] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.278 ms, result 0
00:16:02.383 true
00:16:02.383 15:02:25 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
[2024-11-18 15:02:25.923481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.383 [2024-11-18 15:02:25.923625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:02.383 [2024-11-18 15:02:25.923682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms
00:16:02.383 [2024-11-18 15:02:25.923705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.383 [2024-11-18 15:02:25.923759] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.682 ms, result 0
00:16:02.383 true
00:16:02.383 15:02:25 -- ftl/trim.sh@81 -- # killprocess 83467
00:16:02.383 15:02:25 -- common/autotest_common.sh@936 -- # '[' -z 83467 ']'
00:16:02.383 15:02:25 -- common/autotest_common.sh@940 -- # kill -0 83467
00:16:02.383 15:02:25 -- common/autotest_common.sh@941 -- # uname
00:16:02.383 15:02:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:02.383 15:02:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83467
00:16:02.383 killing process with pid 83467
00:16:02.383 15:02:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:02.383 15:02:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:02.383 15:02:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83467'
00:16:02.383 15:02:25 -- common/autotest_common.sh@955 -- # kill 83467
00:16:02.383 15:02:25 -- common/autotest_common.sh@960 -- # wait 83467
00:16:02.643 [2024-11-18 15:02:26.089760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.643 [2024-11-18 15:02:26.089824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:02.643
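The sequence above is the core of the trim test's teardown: two bdev_ftl_unmap RPCs followed by killprocess. The LBA of the second call is not arbitrary; the layout dump earlier reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the two calls trim the first and the last 1024 blocks of the device's logical space, with rpc.py printing the RPC result ('true') after each one. A minimal sketch of the same steps, assuming spdk_tgt is already running and its pid is in svcpid as in the launch sketch earlier:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Trim 1024 blocks at each end of the FTL bdev's logical space.
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # Teardown, as killprocess does: signal the target, then reap it so the
  # FTL shutdown path traced below can run to completion.
  kill "$svcpid"
  wait "$svcpid"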
[2024-11-18 15:02:26.089838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:02.643 [2024-11-18 15:02:26.089849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.089873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:02.643 [2024-11-18 15:02:26.090439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.090463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:02.643 [2024-11-18 15:02:26.090474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:02.643 [2024-11-18 15:02:26.090486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.090826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.090844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:02.643 [2024-11-18 15:02:26.090860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:16:02.643 [2024-11-18 15:02:26.090872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.095274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.095304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:02.643 [2024-11-18 15:02:26.095326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.376 ms 00:16:02.643 [2024-11-18 15:02:26.095335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.102442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.102470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:02.643 [2024-11-18 15:02:26.102484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.071 ms 00:16:02.643 [2024-11-18 15:02:26.102494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.104228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.104259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:02.643 [2024-11-18 15:02:26.104270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:16:02.643 [2024-11-18 15:02:26.104278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.643 [2024-11-18 15:02:26.108178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.643 [2024-11-18 15:02:26.108211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:02.643 [2024-11-18 15:02:26.108223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:16:02.644 [2024-11-18 15:02:26.108230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 15:02:26.108374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.644 [2024-11-18 15:02:26.108385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:02.644 [2024-11-18 15:02:26.108395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:16:02.644 [2024-11-18 15:02:26.108405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 
15:02:26.110430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.644 [2024-11-18 15:02:26.110458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:02.644 [2024-11-18 15:02:26.110473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:16:02.644 [2024-11-18 15:02:26.110481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 15:02:26.112007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.644 [2024-11-18 15:02:26.112037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:02.644 [2024-11-18 15:02:26.112047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:16:02.644 [2024-11-18 15:02:26.112054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 15:02:26.113296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.644 [2024-11-18 15:02:26.113340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:02.644 [2024-11-18 15:02:26.113351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:16:02.644 [2024-11-18 15:02:26.113357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 15:02:26.114547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.644 [2024-11-18 15:02:26.114575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:02.644 [2024-11-18 15:02:26.114585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:16:02.644 [2024-11-18 15:02:26.114592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.644 [2024-11-18 15:02:26.114648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:02.644 [2024-11-18 15:02:26.114669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:02.644 [2024-11-18 15:02:26.114985] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 37-100: 0 / 261120 wr_cnt: 0 state: free (64 identical per-band entries elided) 00:16:02.645 [2024-11-18 15:02:26.115545] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:02.645 [2024-11-18 15:02:26.115554] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:16:02.645 [2024-11-18 15:02:26.115563] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:02.645 [2024-11-18 15:02:26.115572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:02.645 [2024-11-18 15:02:26.115580] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:02.645 [2024-11-18 15:02:26.115589] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:02.645 [2024-11-18 15:02:26.115596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:02.645 [2024-11-18 15:02:26.115608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:02.645 [2024-11-18 15:02:26.115618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:02.645 [2024-11-18 15:02:26.115626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:02.645 [2024-11-18 15:02:26.115632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:02.645 [2024-11-18 15:02:26.115642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.645 [2024-11-18 15:02:26.115649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:02.645 [2024-11-18 15:02:26.115661] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:16:02.645 [2024-11-18 15:02:26.115667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.117486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.645 [2024-11-18 15:02:26.117598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:02.645 [2024-11-18 15:02:26.117615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:16:02.645 [2024-11-18 15:02:26.117624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.117692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.645 [2024-11-18 15:02:26.117701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:02.645 [2024-11-18 15:02:26.117711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:02.645 [2024-11-18 15:02:26.117722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.124179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.124213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.645 [2024-11-18 15:02:26.124225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.124235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.124334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.124345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.645 [2024-11-18 15:02:26.124364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.124372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.124415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.124426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.645 [2024-11-18 15:02:26.124436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.124444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.124469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.124478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:02.645 [2024-11-18 15:02:26.124487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.124495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.136778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.136822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.645 [2024-11-18 15:02:26.136834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.136846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:02.645 [2024-11-18 15:02:26.141629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.141637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.645 [2024-11-18 15:02:26.141698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.141706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:02.645 [2024-11-18 15:02:26.141761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.141768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:02.645 [2024-11-18 15:02:26.141861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.141869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:02.645 [2024-11-18 15:02:26.141927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.141934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.141983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.141992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:02.645 [2024-11-18 15:02:26.142001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.142009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.142063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.645 [2024-11-18 15:02:26.142075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:02.645 [2024-11-18 15:02:26.142084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.645 [2024-11-18 15:02:26.142091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.645 [2024-11-18 15:02:26.142232] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.454 ms, result 0 00:16:02.904 15:02:26 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:02.905 15:02:26 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.905 [2024-11-18 15:02:26.411754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:16:02.905 [2024-11-18 15:02:26.411871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83503 ] 00:16:03.163 [2024-11-18 15:02:26.560527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.163 [2024-11-18 15:02:26.601366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.163 [2024-11-18 15:02:26.702807] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:03.163 [2024-11-18 15:02:26.702886] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:03.422 [2024-11-18 15:02:26.850441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.850493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:03.422 [2024-11-18 15:02:26.850508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:03.422 [2024-11-18 15:02:26.850518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.852798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.853004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:03.422 [2024-11-18 15:02:26.853021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:16:03.422 [2024-11-18 15:02:26.853030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.853156] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:03.422 [2024-11-18 15:02:26.853410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:03.422 [2024-11-18 15:02:26.853425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.853434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:03.422 [2024-11-18 15:02:26.853443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:03.422 [2024-11-18 15:02:26.853451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.854863] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:03.422 [2024-11-18 15:02:26.857552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.857593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:03.422 [2024-11-18 15:02:26.857607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:16:03.422 [2024-11-18 15:02:26.857615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.857673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.857683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:03.422 [2024-11-18 15:02:26.857694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:03.422 [2024-11-18 15:02:26.857704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.864177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
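For orientation, the trim.sh@85 step above launches spdk_dd to copy the contents of the ftl0 bdev back into a flat file; the FTL startup trace that follows is that spdk_dd process bringing ftl0 up from the JSON config. A minimal standalone sketch of the same invocation, with every flag taken verbatim from the command line in the log (the SPDK_REPO variable is introduced here for readability only, and the 4 KiB block size is an assumption based on FTL's default logical block size):

    #!/usr/bin/env bash
    # Read-back step as run by trim.sh@85 (flags copied from the log above).
    #   --ib=ftl0 : input bdev, the FTL device under test
    #   --of=...  : flat output file on the host filesystem
    #   --count   : 65536 logical blocks; at a 4 KiB FTL block size this is
    #               256 MiB, matching the "Copying: 256/256 [MB]" progress below
    #   --json    : bdev config that recreates ftl0 on its cache and base bdevs
    SPDK_REPO=/home/vagrant/spdk_repo/spdk

    "$SPDK_REPO/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK_REPO/test/ftl/data" \
      --count=65536 \
      --json="$SPDK_REPO/test/ftl/config/ftl.json"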
00:16:03.422 [2024-11-18 15:02:26.864371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:03.422 [2024-11-18 15:02:26.864393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.431 ms 00:16:03.422 [2024-11-18 15:02:26.864402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.864500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.864510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:03.422 [2024-11-18 15:02:26.864522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:03.422 [2024-11-18 15:02:26.864531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.864558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.864566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:03.422 [2024-11-18 15:02:26.864574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:03.422 [2024-11-18 15:02:26.864583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.864610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:03.422 [2024-11-18 15:02:26.866250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.866282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:03.422 [2024-11-18 15:02:26.866291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:16:03.422 [2024-11-18 15:02:26.866299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.866353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.866366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:03.422 [2024-11-18 15:02:26.866375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:03.422 [2024-11-18 15:02:26.866383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.422 [2024-11-18 15:02:26.866402] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:03.422 [2024-11-18 15:02:26.866420] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:03.422 [2024-11-18 15:02:26.866457] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:03.422 [2024-11-18 15:02:26.866473] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:03.422 [2024-11-18 15:02:26.866549] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:03.422 [2024-11-18 15:02:26.866560] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:03.422 [2024-11-18 15:02:26.866570] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:03.422 [2024-11-18 15:02:26.866580] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:03.422 [2024-11-18 15:02:26.866590] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:03.422 [2024-11-18 15:02:26.866597] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:03.422 [2024-11-18 15:02:26.866605] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:03.422 [2024-11-18 15:02:26.866627] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:03.422 [2024-11-18 15:02:26.866634] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:03.422 [2024-11-18 15:02:26.866643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.422 [2024-11-18 15:02:26.866650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:03.423 [2024-11-18 15:02:26.866658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:16:03.423 [2024-11-18 15:02:26.866665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.866735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.866748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:03.423 [2024-11-18 15:02:26.866755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:03.423 [2024-11-18 15:02:26.866765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.866843] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:03.423 [2024-11-18 15:02:26.866853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:03.423 [2024-11-18 15:02:26.866861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.423 [2024-11-18 15:02:26.866870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.423 [2024-11-18 15:02:26.866877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:03.423 [2024-11-18 15:02:26.866884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:03.423 [2024-11-18 15:02:26.866890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:03.423 [2024-11-18 15:02:26.866900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:03.423 [2024-11-18 15:02:26.866908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:03.423 [2024-11-18 15:02:26.866916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.423 [2024-11-18 15:02:26.866923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:03.423 [2024-11-18 15:02:26.866938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:03.423 [2024-11-18 15:02:26.866945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.423 [2024-11-18 15:02:26.866952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:03.423 [2024-11-18 15:02:26.866960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:03.423 [2024-11-18 15:02:26.866970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.423 [2024-11-18 15:02:26.866978] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:03.423 [2024-11-18 15:02:26.866985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:03.423 [2024-11-18 15:02:26.866993] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:03.423 [2024-11-18 15:02:26.867009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:03.423 [2024-11-18 15:02:26.867017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:03.423 [2024-11-18 15:02:26.867034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:03.423 [2024-11-18 15:02:26.867056] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:03.423 [2024-11-18 15:02:26.867079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:03.423 [2024-11-18 15:02:26.867106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:03.423 [2024-11-18 15:02:26.867127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.423 [2024-11-18 15:02:26.867143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:03.423 [2024-11-18 15:02:26.867150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:03.423 [2024-11-18 15:02:26.867157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.423 [2024-11-18 15:02:26.867164] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:03.423 [2024-11-18 15:02:26.867173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:03.423 [2024-11-18 15:02:26.867181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.423 [2024-11-18 15:02:26.867190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.423 [2024-11-18 15:02:26.867198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:03.423 [2024-11-18 15:02:26.867206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:03.423 [2024-11-18 15:02:26.867214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:03.423 [2024-11-18 15:02:26.867223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:03.423 [2024-11-18 15:02:26.867230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:03.423 [2024-11-18 15:02:26.867238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:03.423 
[2024-11-18 15:02:26.867247] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:03.423 [2024-11-18 15:02:26.867257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.423 [2024-11-18 15:02:26.867269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:03.423 [2024-11-18 15:02:26.867277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:03.423 [2024-11-18 15:02:26.867285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:03.423 [2024-11-18 15:02:26.867292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:03.423 [2024-11-18 15:02:26.867300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:03.423 [2024-11-18 15:02:26.867308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:03.423 [2024-11-18 15:02:26.867327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:03.423 [2024-11-18 15:02:26.867334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:03.423 [2024-11-18 15:02:26.867342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:03.423 [2024-11-18 15:02:26.867349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:03.423 [2024-11-18 15:02:26.867356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:03.423 [2024-11-18 15:02:26.867365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:03.423 [2024-11-18 15:02:26.867373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:03.423 [2024-11-18 15:02:26.867380] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:03.423 [2024-11-18 15:02:26.867388] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.423 [2024-11-18 15:02:26.867395] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:03.423 [2024-11-18 15:02:26.867404] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:03.423 [2024-11-18 15:02:26.867411] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:03.423 [2024-11-18 15:02:26.867420] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:03.423 [2024-11-18 15:02:26.867428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.867436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:03.423 [2024-11-18 15:02:26.867444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:16:03.423 [2024-11-18 15:02:26.867451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.875362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.875515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:03.423 [2024-11-18 15:02:26.875605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.866 ms 00:16:03.423 [2024-11-18 15:02:26.875644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.875777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.875857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:03.423 [2024-11-18 15:02:26.875895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:03.423 [2024-11-18 15:02:26.875915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.894987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.895160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:03.423 [2024-11-18 15:02:26.895238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.037 ms 00:16:03.423 [2024-11-18 15:02:26.895269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.895386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.423 [2024-11-18 15:02:26.895424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:03.423 [2024-11-18 15:02:26.895504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:03.423 [2024-11-18 15:02:26.895533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.423 [2024-11-18 15:02:26.895983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.896096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:03.424 [2024-11-18 15:02:26.896159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:16:03.424 [2024-11-18 15:02:26.896193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.896380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.896421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:03.424 [2024-11-18 15:02:26.896483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:03.424 [2024-11-18 15:02:26.896518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.903573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.903678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:03.424 [2024-11-18 15:02:26.903728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
7.011 ms 00:16:03.424 [2024-11-18 15:02:26.903750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.906574] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:03.424 [2024-11-18 15:02:26.906705] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:03.424 [2024-11-18 15:02:26.906764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.906785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:03.424 [2024-11-18 15:02:26.906814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:16:03.424 [2024-11-18 15:02:26.906833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.921443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.921568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:03.424 [2024-11-18 15:02:26.921625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:16:03.424 [2024-11-18 15:02:26.921647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.923425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.923522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:03.424 [2024-11-18 15:02:26.923572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:16:03.424 [2024-11-18 15:02:26.923593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.925075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.925174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:03.424 [2024-11-18 15:02:26.925221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:16:03.424 [2024-11-18 15:02:26.925242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.925461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.925529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:03.424 [2024-11-18 15:02:26.925573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:03.424 [2024-11-18 15:02:26.925594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.945762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.945913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:03.424 [2024-11-18 15:02:26.945966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.131 ms 00:16:03.424 [2024-11-18 15:02:26.945977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.953377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:03.424 [2024-11-18 15:02:26.970186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.970224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:03.424 [2024-11-18 
15:02:26.970236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.129 ms 00:16:03.424 [2024-11-18 15:02:26.970244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.970345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.970358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:03.424 [2024-11-18 15:02:26.970367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:03.424 [2024-11-18 15:02:26.970380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.970435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.970449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:03.424 [2024-11-18 15:02:26.970457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:03.424 [2024-11-18 15:02:26.970464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.971720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.971751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:03.424 [2024-11-18 15:02:26.971767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:16:03.424 [2024-11-18 15:02:26.971774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.971809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.971818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:03.424 [2024-11-18 15:02:26.971830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:03.424 [2024-11-18 15:02:26.971841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.971874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:03.424 [2024-11-18 15:02:26.971884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.971892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:03.424 [2024-11-18 15:02:26.971904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:03.424 [2024-11-18 15:02:26.971913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.975740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.975868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:03.424 [2024-11-18 15:02:26.975883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 00:16:03.424 [2024-11-18 15:02:26.975891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.975970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.424 [2024-11-18 15:02:26.975980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:03.424 [2024-11-18 15:02:26.975989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:03.424 [2024-11-18 15:02:26.975996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.424 [2024-11-18 15:02:26.977105] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:03.424 [2024-11-18 15:02:26.978154] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.375 ms, result 0 00:16:03.424 [2024-11-18 15:02:26.979085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:03.424 [2024-11-18 15:02:26.987607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:04.796  [2024-11-18T15:02:29.321Z] Copying: 45/256 [MB] (45 MBps) [2024-11-18T15:02:30.325Z] Copying: 88/256 [MB] (42 MBps) [2024-11-18T15:02:31.260Z] Copying: 129/256 [MB] (41 MBps) [2024-11-18T15:02:32.194Z] Copying: 173/256 [MB] (43 MBps) [2024-11-18T15:02:33.131Z] Copying: 216/256 [MB] (42 MBps) [2024-11-18T15:02:33.131Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-18 15:02:32.936640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:09.541 [2024-11-18 15:02:32.938139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.938272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:09.541 [2024-11-18 15:02:32.938366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.541 [2024-11-18 15:02:32.938431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.938474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:09.541 [2024-11-18 15:02:32.939133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.939234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:09.541 [2024-11-18 15:02:32.939291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:16:09.541 [2024-11-18 15:02:32.939313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.939651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.939731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:09.541 [2024-11-18 15:02:32.939783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:16:09.541 [2024-11-18 15:02:32.939806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.943526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.943606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:09.541 [2024-11-18 15:02:32.943695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:16:09.541 [2024-11-18 15:02:32.943719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.950700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.950816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:09.541 [2024-11-18 15:02:32.950893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:16:09.541 [2024-11-18 15:02:32.950915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.952592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.952693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:09.541 [2024-11-18 15:02:32.952741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:16:09.541 [2024-11-18 15:02:32.952763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.956581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.956688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:09.541 [2024-11-18 15:02:32.956739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:16:09.541 [2024-11-18 15:02:32.956777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.956935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.956986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:09.541 [2024-11-18 15:02:32.957048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:09.541 [2024-11-18 15:02:32.957070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.959175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.959275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:09.541 [2024-11-18 15:02:32.959339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:16:09.541 [2024-11-18 15:02:32.959362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.960823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.960921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:09.541 [2024-11-18 15:02:32.960969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:16:09.541 [2024-11-18 15:02:32.960990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.962836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.963140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:09.541 [2024-11-18 15:02:32.963307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:16:09.541 [2024-11-18 15:02:32.963360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.965604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.541 [2024-11-18 15:02:32.965707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:09.541 [2024-11-18 15:02:32.965739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:16:09.541 [2024-11-18 15:02:32.965760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.541 [2024-11-18 15:02:32.965823] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:09.541 [2024-11-18 15:02:32.965861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:09.541 [2024-11-18 15:02:32.965891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:09.541 [2024-11-18 15:02:32.965914] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (98 identical per-band entries elided) 00:16:09.542 [2024-11-18 15:02:32.968215] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:09.542 [2024-11-18 15:02:32.968227] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc
00:16:09.542 [2024-11-18 15:02:32.968239] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:09.542 [2024-11-18 15:02:32.968251] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:09.542 [2024-11-18 15:02:32.968262] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:09.542 [2024-11-18 15:02:32.968273] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:09.542 [2024-11-18 15:02:32.968299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:09.542 [2024-11-18 15:02:32.968311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:09.542 [2024-11-18 15:02:32.968342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:09.542 [2024-11-18 15:02:32.968352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:09.542 [2024-11-18 15:02:32.968363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:09.542 [2024-11-18 15:02:32.968374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.542 [2024-11-18 15:02:32.968386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:09.542 [2024-11-18 15:02:32.968404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:16:09.542 [2024-11-18 15:02:32.968416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.542 [2024-11-18 15:02:32.970517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.542 [2024-11-18 15:02:32.970549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:09.542 [2024-11-18 15:02:32.970572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:16:09.542 [2024-11-18 15:02:32.970585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.542 [2024-11-18 15:02:32.970681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.542 [2024-11-18 15:02:32.970697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:09.542 [2024-11-18 15:02:32.970716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:09.542 [2024-11-18 15:02:32.970730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.542 [2024-11-18 15:02:32.978226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.542 [2024-11-18 15:02:32.978421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:09.542 [2024-11-18 15:02:32.978443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.542 [2024-11-18 15:02:32.978455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.542 [2024-11-18 15:02:32.978578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.978593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:09.543 [2024-11-18 15:02:32.978606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.978646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.978712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.978727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:09.543 [2024-11-18 15:02:32.978740] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.978751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.978777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.978793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:09.543 [2024-11-18 15:02:32.978805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.978816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.989803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.989946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:09.543 [2024-11-18 15:02:32.989961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.989969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.994644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:09.543 [2024-11-18 15:02:32.994653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.994661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.994707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:09.543 [2024-11-18 15:02:32.994715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.994722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.994761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:09.543 [2024-11-18 15:02:32.994776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.994785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.994868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:09.543 [2024-11-18 15:02:32.994876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.994884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.994930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:09.543 [2024-11-18 15:02:32.994943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.994953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.994996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.995006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:16:09.543 [2024-11-18 15:02:32.995014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.995021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.995072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.543 [2024-11-18 15:02:32.995083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:09.543 [2024-11-18 15:02:32.995091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.543 [2024-11-18 15:02:32.995101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.543 [2024-11-18 15:02:32.995248] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.082 ms, result 0 00:16:09.802 00:16:09.802 00:16:09.802 15:02:33 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:09.802 15:02:33 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:10.368 15:02:33 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:10.368 [2024-11-18 15:02:33.830386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:10.368 [2024-11-18 15:02:33.830514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83586 ] 00:16:10.626 [2024-11-18 15:02:33.979247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.626 [2024-11-18 15:02:34.019468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.626 [2024-11-18 15:02:34.118477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:10.626 [2024-11-18 15:02:34.118560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:10.888 [2024-11-18 15:02:34.270265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.270333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:10.888 [2024-11-18 15:02:34.270351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:10.888 [2024-11-18 15:02:34.270363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.272632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.272796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:10.888 [2024-11-18 15:02:34.272814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:16:10.888 [2024-11-18 15:02:34.272822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.272897] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:10.888 [2024-11-18 15:02:34.273137] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:10.888 [2024-11-18 15:02:34.273151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 
15:02:34.273160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:10.888 [2024-11-18 15:02:34.273168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:16:10.888 [2024-11-18 15:02:34.273176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.274556] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:10.888 [2024-11-18 15:02:34.277129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.277162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:10.888 [2024-11-18 15:02:34.277173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:16:10.888 [2024-11-18 15:02:34.277180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.277243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.277254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:10.888 [2024-11-18 15:02:34.277262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:10.888 [2024-11-18 15:02:34.277275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.283500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.283645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:10.888 [2024-11-18 15:02:34.283660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:16:10.888 [2024-11-18 15:02:34.283668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.283768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.283778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:10.888 [2024-11-18 15:02:34.283789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:10.888 [2024-11-18 15:02:34.283796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.283821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.283828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:10.888 [2024-11-18 15:02:34.283836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:10.888 [2024-11-18 15:02:34.283846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.283868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:10.888 [2024-11-18 15:02:34.285483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.285515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:10.888 [2024-11-18 15:02:34.285524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:16:10.888 [2024-11-18 15:02:34.285532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.285569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.285583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:10.888 [2024-11-18 
15:02:34.285592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:10.888 [2024-11-18 15:02:34.285599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.888 [2024-11-18 15:02:34.285617] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:10.888 [2024-11-18 15:02:34.285636] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:10.888 [2024-11-18 15:02:34.285672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:10.888 [2024-11-18 15:02:34.285690] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:10.888 [2024-11-18 15:02:34.285768] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:10.888 [2024-11-18 15:02:34.285779] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:10.888 [2024-11-18 15:02:34.285789] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:10.888 [2024-11-18 15:02:34.285799] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:10.888 [2024-11-18 15:02:34.285808] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:10.888 [2024-11-18 15:02:34.285816] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:10.888 [2024-11-18 15:02:34.285823] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:10.888 [2024-11-18 15:02:34.285832] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:10.888 [2024-11-18 15:02:34.285839] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:10.888 [2024-11-18 15:02:34.285846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.888 [2024-11-18 15:02:34.285854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:10.889 [2024-11-18 15:02:34.285862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:16:10.889 [2024-11-18 15:02:34.285869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.285933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.285942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:10.889 [2024-11-18 15:02:34.285949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:10.889 [2024-11-18 15:02:34.285958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.286038] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:10.889 [2024-11-18 15:02:34.286048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:10.889 [2024-11-18 15:02:34.286059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:10.889 [2024-11-18 15:02:34.286085] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:10.889 [2024-11-18 15:02:34.286106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:10.889 [2024-11-18 15:02:34.286120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:10.889 [2024-11-18 15:02:34.286133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:10.889 [2024-11-18 15:02:34.286140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:10.889 [2024-11-18 15:02:34.286148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:10.889 [2024-11-18 15:02:34.286155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:10.889 [2024-11-18 15:02:34.286165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:10.889 [2024-11-18 15:02:34.286181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:10.889 [2024-11-18 15:02:34.286191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:10.889 [2024-11-18 15:02:34.286208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:10.889 [2024-11-18 15:02:34.286216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:10.889 [2024-11-18 15:02:34.286231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:10.889 [2024-11-18 15:02:34.286254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:10.889 [2024-11-18 15:02:34.286275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:10.889 [2024-11-18 15:02:34.286303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:10.889 [2024-11-18 15:02:34.286342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:10.889 [2024-11-18 15:02:34.286357] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:10.889 [2024-11-18 15:02:34.286364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:10.889 [2024-11-18 15:02:34.286371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:10.889 [2024-11-18 15:02:34.286378] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:10.889 [2024-11-18 15:02:34.286386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:10.889 [2024-11-18 15:02:34.286395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.889 [2024-11-18 15:02:34.286415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:10.889 [2024-11-18 15:02:34.286423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:10.889 [2024-11-18 15:02:34.286431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:10.889 [2024-11-18 15:02:34.286441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:10.889 [2024-11-18 15:02:34.286450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:10.889 [2024-11-18 15:02:34.286457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:10.889 [2024-11-18 15:02:34.286466] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:10.889 [2024-11-18 15:02:34.286476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:10.889 [2024-11-18 15:02:34.286489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:10.889 [2024-11-18 15:02:34.286497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:10.889 [2024-11-18 15:02:34.286505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:10.889 [2024-11-18 15:02:34.286512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:10.889 [2024-11-18 15:02:34.286518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:10.889 [2024-11-18 15:02:34.286525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:10.889 [2024-11-18 15:02:34.286533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:10.889 [2024-11-18 15:02:34.286540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:10.889 [2024-11-18 15:02:34.286546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:10.889 [2024-11-18 15:02:34.286553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:10.889 [2024-11-18 15:02:34.286560] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:10.889 [2024-11-18 15:02:34.286569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:10.889 [2024-11-18 15:02:34.286577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:10.889 [2024-11-18 15:02:34.286584] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:10.889 [2024-11-18 15:02:34.286591] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:10.889 [2024-11-18 15:02:34.286600] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:10.889 [2024-11-18 15:02:34.286607] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:10.889 [2024-11-18 15:02:34.286614] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:10.889 [2024-11-18 15:02:34.286630] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:10.889 [2024-11-18 15:02:34.286638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.286645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:10.889 [2024-11-18 15:02:34.286654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:16:10.889 [2024-11-18 15:02:34.286661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.294528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.294653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.889 [2024-11-18 15:02:34.294715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.824 ms 00:16:10.889 [2024-11-18 15:02:34.294742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.294868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.295012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:10.889 [2024-11-18 15:02:34.295036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:10.889 [2024-11-18 15:02:34.295054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.317709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.317885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.889 [2024-11-18 15:02:34.317976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.617 ms 00:16:10.889 [2024-11-18 15:02:34.318012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.318130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.318172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:16:10.889 [2024-11-18 15:02:34.318371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:10.889 [2024-11-18 15:02:34.318409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.889 [2024-11-18 15:02:34.318925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.889 [2024-11-18 15:02:34.319039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.889 [2024-11-18 15:02:34.319104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:16:10.889 [2024-11-18 15:02:34.319181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.319339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.319451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.890 [2024-11-18 15:02:34.319478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:10.890 [2024-11-18 15:02:34.319498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.326342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.326437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.890 [2024-11-18 15:02:34.326486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:16:10.890 [2024-11-18 15:02:34.326509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.329508] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:10.890 [2024-11-18 15:02:34.329620] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:10.890 [2024-11-18 15:02:34.329679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.329700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:10.890 [2024-11-18 15:02:34.330042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:16:10.890 [2024-11-18 15:02:34.330085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.344892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.344925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:10.890 [2024-11-18 15:02:34.344936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.747 ms 00:16:10.890 [2024-11-18 15:02:34.344945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.346841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.346865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:10.890 [2024-11-18 15:02:34.346873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:16:10.890 [2024-11-18 15:02:34.346881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.348488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.348529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:10.890 [2024-11-18 15:02:34.348550] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:16:10.890 [2024-11-18 15:02:34.348568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.349165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.349278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:10.890 [2024-11-18 15:02:34.349347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:16:10.890 [2024-11-18 15:02:34.349378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.371557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.371675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:10.890 [2024-11-18 15:02:34.371726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.141 ms 00:16:10.890 [2024-11-18 15:02:34.371748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.379395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:10.890 [2024-11-18 15:02:34.395720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.395841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:10.890 [2024-11-18 15:02:34.395893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.662 ms 00:16:10.890 [2024-11-18 15:02:34.395916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.396009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.396038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:10.890 [2024-11-18 15:02:34.396059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:10.890 [2024-11-18 15:02:34.396078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.396141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.396165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:10.890 [2024-11-18 15:02:34.396237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:10.890 [2024-11-18 15:02:34.396261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.397535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.397629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:10.890 [2024-11-18 15:02:34.397680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:16:10.890 [2024-11-18 15:02:34.397702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.397749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.397772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:10.890 [2024-11-18 15:02:34.397795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:10.890 [2024-11-18 15:02:34.397813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.397860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:16:10.890 [2024-11-18 15:02:34.397884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.397907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:10.890 [2024-11-18 15:02:34.397964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:10.890 [2024-11-18 15:02:34.397989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.402277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.402400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:10.890 [2024-11-18 15:02:34.402451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:16:10.890 [2024-11-18 15:02:34.402476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.402556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.890 [2024-11-18 15:02:34.402589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:10.890 [2024-11-18 15:02:34.402609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:10.890 [2024-11-18 15:02:34.402647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.890 [2024-11-18 15:02:34.403515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:10.890 [2024-11-18 15:02:34.404580] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.950 ms, result 0 00:16:10.890 [2024-11-18 15:02:34.405855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:10.890 [2024-11-18 15:02:34.414042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:11.153  [2024-11-18T15:02:34.743Z] Copying: 4096/4096 [kB] (average 35 MBps)[2024-11-18 15:02:34.528559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:11.153 [2024-11-18 15:02:34.529182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.529209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:11.153 [2024-11-18 15:02:34.529222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:11.153 [2024-11-18 15:02:34.529231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.529251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:11.153 [2024-11-18 15:02:34.529812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.529839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:11.153 [2024-11-18 15:02:34.529849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:16:11.153 [2024-11-18 15:02:34.529856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.531695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.531799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:11.153 [2024-11-18 
15:02:34.531814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:16:11.153 [2024-11-18 15:02:34.531822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.535862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.535894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:11.153 [2024-11-18 15:02:34.535903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.018 ms 00:16:11.153 [2024-11-18 15:02:34.535911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.542764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.542869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:11.153 [2024-11-18 15:02:34.542887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:16:11.153 [2024-11-18 15:02:34.542895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.545127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.545160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:11.153 [2024-11-18 15:02:34.545168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:16:11.153 [2024-11-18 15:02:34.545176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.549211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.549243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:11.153 [2024-11-18 15:02:34.549252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:16:11.153 [2024-11-18 15:02:34.549269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.549400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.549415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:11.153 [2024-11-18 15:02:34.549423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:11.153 [2024-11-18 15:02:34.549431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.551611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.551640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:11.153 [2024-11-18 15:02:34.551649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:16:11.153 [2024-11-18 15:02:34.551656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.553303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.553344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:11.153 [2024-11-18 15:02:34.553353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:16:11.153 [2024-11-18 15:02:34.553360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.554562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.554590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:16:11.153 [2024-11-18 15:02:34.554598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:16:11.153 [2024-11-18 15:02:34.554605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.555983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-11-18 15:02:34.556013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:11.153 [2024-11-18 15:02:34.556021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:16:11.153 [2024-11-18 15:02:34.556027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-11-18 15:02:34.556055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:11.153 [2024-11-18 15:02:34.556069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 
[2024-11-18 15:02:34.556221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:11.153 [2024-11-18 15:02:34.556393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:16:11.154 [2024-11-18 15:02:34.556422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:11.154 [2024-11-18 15:02:34.556845] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:11.154 [2024-11-18 15:02:34.556853] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:16:11.154 [2024-11-18 15:02:34.556860] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:11.154 [2024-11-18 15:02:34.556874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:11.154 [2024-11-18 15:02:34.556881] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:11.154 [2024-11-18 15:02:34.556888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:11.154 [2024-11-18 15:02:34.556900] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:11.154 [2024-11-18 15:02:34.556907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:11.154 [2024-11-18 15:02:34.556914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:11.154 [2024-11-18 15:02:34.556921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:11.154 [2024-11-18 15:02:34.556927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:11.154 [2024-11-18 15:02:34.556934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.154 [2024-11-18 15:02:34.556943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:11.154 [2024-11-18 15:02:34.556951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:16:11.154 [2024-11-18 15:02:34.556958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 15:02:34.558188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.154 [2024-11-18 15:02:34.558203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:11.154 [2024-11-18 15:02:34.558213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:16:11.154 [2024-11-18 15:02:34.558221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 15:02:34.558284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.154 [2024-11-18 15:02:34.558293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:11.154 [2024-11-18 15:02:34.558301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:11.154 [2024-11-18 15:02:34.558307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 
15:02:34.564787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.154 [2024-11-18 15:02:34.564818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.154 [2024-11-18 15:02:34.564827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.154 [2024-11-18 15:02:34.564835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 15:02:34.564912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.154 [2024-11-18 15:02:34.564921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.154 [2024-11-18 15:02:34.564930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.154 [2024-11-18 15:02:34.564942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 15:02:34.564978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.154 [2024-11-18 15:02:34.564988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.154 [2024-11-18 15:02:34.564997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.154 [2024-11-18 15:02:34.565008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.154 [2024-11-18 15:02:34.565025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.154 [2024-11-18 15:02:34.565036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.154 [2024-11-18 15:02:34.565043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.565054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.575854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.575894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.155 [2024-11-18 15:02:34.575904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.575912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.155 [2024-11-18 15:02:34.580605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.155 [2024-11-18 15:02:34.580671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.155 [2024-11-18 15:02:34.580727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580735] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.155 [2024-11-18 15:02:34.580822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:11.155 [2024-11-18 15:02:34.580877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.580937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.155 [2024-11-18 15:02:34.580944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.580952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.580997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:11.155 [2024-11-18 15:02:34.581006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.155 [2024-11-18 15:02:34.581018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:11.155 [2024-11-18 15:02:34.581026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.155 [2024-11-18 15:02:34.581173] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.953 ms, result 0 00:16:11.416 00:16:11.416 00:16:11.416 15:02:34 -- ftl/trim.sh@93 -- # svcpid=83600 00:16:11.416 15:02:34 -- ftl/trim.sh@94 -- # waitforlisten 83600 00:16:11.416 15:02:34 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:11.416 15:02:34 -- common/autotest_common.sh@829 -- # '[' -z 83600 ']' 00:16:11.416 15:02:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.416 15:02:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.416 15:02:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.416 15:02:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.416 15:02:34 -- common/autotest_common.sh@10 -- # set +x 00:16:11.416 [2024-11-18 15:02:34.893728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
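At this point trim.sh has launched spdk_tgt (pid 83600) and is blocking in waitforlisten until the target's JSON-RPC socket at /var/tmp/spdk.sock accepts requests; only then does it replay the FTL config via rpc.py load_config. A minimal sketch of that wait loop follows. The 100-iteration budget mirrors the max_retries=100 visible in the trace above, but the rpc_get_methods probe and the 0.5 s sleep are assumptions here, not the exact logic of waitforlisten() in common/autotest_common.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    # Probe the UNIX-domain RPC socket; any successful call means the app is up.
    if "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
(( i < 100 )) || { echo "spdk_tgt never came up on $rpc_addr" >&2; exit 1; }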
00:16:11.416 [2024-11-18 15:02:34.893841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83600 ] 00:16:11.676 [2024-11-18 15:02:35.040341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.676 [2024-11-18 15:02:35.078782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.676 [2024-11-18 15:02:35.078992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.246 15:02:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.246 15:02:35 -- common/autotest_common.sh@862 -- # return 0 00:16:12.246 15:02:35 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:12.508 [2024-11-18 15:02:35.884307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:12.508 [2024-11-18 15:02:35.884387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:12.508 [2024-11-18 15:02:36.049587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.049651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.508 [2024-11-18 15:02:36.049668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:12.508 [2024-11-18 15:02:36.049676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.051974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.052011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.508 [2024-11-18 15:02:36.052027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:16:12.508 [2024-11-18 15:02:36.052035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.052105] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.508 [2024-11-18 15:02:36.052348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.508 [2024-11-18 15:02:36.052364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.052372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.508 [2024-11-18 15:02:36.052382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:12.508 [2024-11-18 15:02:36.052394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.053822] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:12.508 [2024-11-18 15:02:36.057128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.057166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:12.508 [2024-11-18 15:02:36.057176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:16:12.508 [2024-11-18 15:02:36.057201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.057259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.057273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:12.508 [2024-11-18 15:02:36.057285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:12.508 [2024-11-18 15:02:36.057294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.063928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.064098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.508 [2024-11-18 15:02:36.064115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.565 ms 00:16:12.508 [2024-11-18 15:02:36.064124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.064234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.064249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.508 [2024-11-18 15:02:36.064259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:12.508 [2024-11-18 15:02:36.064270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.064302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.064312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.508 [2024-11-18 15:02:36.064353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:12.508 [2024-11-18 15:02:36.064365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.064393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:12.508 [2024-11-18 15:02:36.066056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.066084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.508 [2024-11-18 15:02:36.066096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:16:12.508 [2024-11-18 15:02:36.066103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.066152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.066161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.508 [2024-11-18 15:02:36.066171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:12.508 [2024-11-18 15:02:36.066178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.066203] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:12.508 [2024-11-18 15:02:36.066223] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:12.508 [2024-11-18 15:02:36.066261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:12.508 [2024-11-18 15:02:36.066277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:12.508 [2024-11-18 15:02:36.066370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:12.508 [2024-11-18 15:02:36.066382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:12.508 [2024-11-18 15:02:36.066396] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:12.508 [2024-11-18 15:02:36.066408] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066424] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066433] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:12.508 [2024-11-18 15:02:36.066444] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.508 [2024-11-18 15:02:36.066451] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:12.508 [2024-11-18 15:02:36.066460] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:12.508 [2024-11-18 15:02:36.066469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.066478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.508 [2024-11-18 15:02:36.066486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:12.508 [2024-11-18 15:02:36.066495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.066563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.508 [2024-11-18 15:02:36.066573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:12.508 [2024-11-18 15:02:36.066582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:12.508 [2024-11-18 15:02:36.066593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.508 [2024-11-18 15:02:36.066700] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.508 [2024-11-18 15:02:36.066716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.508 [2024-11-18 15:02:36.066726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.508 [2024-11-18 15:02:36.066762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.508 [2024-11-18 15:02:36.066789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.508 [2024-11-18 15:02:36.066807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.508 [2024-11-18 15:02:36.066816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:12.508 [2024-11-18 15:02:36.066823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.508 [2024-11-18 15:02:36.066833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.508 [2024-11-18 15:02:36.066842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:12.508 [2024-11-18 15:02:36.066851] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.508 [2024-11-18 15:02:36.066867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:12.508 [2024-11-18 15:02:36.066875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:12.508 [2024-11-18 15:02:36.066894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:12.508 [2024-11-18 15:02:36.066903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.508 [2024-11-18 15:02:36.066929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.508 [2024-11-18 15:02:36.066956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.508 [2024-11-18 15:02:36.066983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:12.508 [2024-11-18 15:02:36.066990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.508 [2024-11-18 15:02:36.066999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:12.508 [2024-11-18 15:02:36.067006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:12.508 [2024-11-18 15:02:36.067015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.509 [2024-11-18 15:02:36.067023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.509 [2024-11-18 15:02:36.067034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:12.509 [2024-11-18 15:02:36.067042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.509 [2024-11-18 15:02:36.067050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.509 [2024-11-18 15:02:36.067058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:12.509 [2024-11-18 15:02:36.067067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.509 [2024-11-18 15:02:36.067074] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.509 [2024-11-18 15:02:36.067084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.509 [2024-11-18 15:02:36.067093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.509 [2024-11-18 15:02:36.067103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.509 [2024-11-18 15:02:36.067114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.509 [2024-11-18 15:02:36.067124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.509 [2024-11-18 15:02:36.067131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:12.509 [2024-11-18 15:02:36.067140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.509 [2024-11-18 15:02:36.067148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.509 [2024-11-18 15:02:36.067159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.509 [2024-11-18 15:02:36.067168] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.509 [2024-11-18 15:02:36.067183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.509 [2024-11-18 15:02:36.067191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:12.509 [2024-11-18 15:02:36.067200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:12.509 [2024-11-18 15:02:36.067209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:12.509 [2024-11-18 15:02:36.067218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:12.509 [2024-11-18 15:02:36.067224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:12.509 [2024-11-18 15:02:36.067235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:12.509 [2024-11-18 15:02:36.067243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:12.509 [2024-11-18 15:02:36.067251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:12.509 [2024-11-18 15:02:36.067258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:12.509 [2024-11-18 15:02:36.067267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:12.509 [2024-11-18 15:02:36.067274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:12.509 [2024-11-18 15:02:36.067283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:12.509 [2024-11-18 15:02:36.067290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:12.509 [2024-11-18 15:02:36.067299] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.509 [2024-11-18 15:02:36.067308] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.509 [2024-11-18 15:02:36.067546] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.509 [2024-11-18 15:02:36.067592] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.509 [2024-11-18 15:02:36.067623] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.509 [2024-11-18 15:02:36.067652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.509 [2024-11-18 15:02:36.067684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.067705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.509 [2024-11-18 15:02:36.067728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:16:12.509 [2024-11-18 15:02:36.067786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.075711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.075831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:12.509 [2024-11-18 15:02:36.075896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.849 ms 00:16:12.509 [2024-11-18 15:02:36.075925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.076056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.076207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:12.509 [2024-11-18 15:02:36.076235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:12.509 [2024-11-18 15:02:36.076256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.087937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.088063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:12.509 [2024-11-18 15:02:36.088115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.632 ms 00:16:12.509 [2024-11-18 15:02:36.088138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.088217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.088242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:12.509 [2024-11-18 15:02:36.088264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:12.509 [2024-11-18 15:02:36.088284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.088715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.088763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:12.509 [2024-11-18 15:02:36.088790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:16:12.509 [2024-11-18 15:02:36.088810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.509 [2024-11-18 15:02:36.088944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.509 [2024-11-18 15:02:36.088970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:12.509 [2024-11-18 15:02:36.089034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:12.509 [2024-11-18 15:02:36.089060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.095981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.096079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:12.771 [2024-11-18 15:02:36.096128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms 00:16:12.771 [2024-11-18 15:02:36.096154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.099410] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:12.771 [2024-11-18 15:02:36.099525] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:12.771 [2024-11-18 15:02:36.099584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.099601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:12.771 [2024-11-18 15:02:36.099612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:16:12.771 [2024-11-18 15:02:36.099619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.114598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.114725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:12.771 [2024-11-18 15:02:36.114746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.895 ms 00:16:12.771 [2024-11-18 15:02:36.114757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.116977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.117007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:12.771 [2024-11-18 15:02:36.117021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:16:12.771 [2024-11-18 15:02:36.117028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.118795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.118824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:12.771 [2024-11-18 15:02:36.118835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:16:12.771 [2024-11-18 15:02:36.118842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.119059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.119071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:12.771 [2024-11-18 15:02:36.119081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:12.771 [2024-11-18 15:02:36.119089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.140154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.140196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:12.771 [2024-11-18 15:02:36.140210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.039 ms 00:16:12.771 [2024-11-18 15:02:36.140221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.147923] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:12.771 [2024-11-18 15:02:36.164526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.164573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:12.771 [2024-11-18 15:02:36.164587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:16:12.771 [2024-11-18 15:02:36.164601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.164691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.164703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:12.771 [2024-11-18 15:02:36.164712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:12.771 [2024-11-18 15:02:36.164722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.164777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.164787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:12.771 [2024-11-18 15:02:36.164802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:12.771 [2024-11-18 15:02:36.164811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.166077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.166240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:12.771 [2024-11-18 15:02:36.166255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:16:12.771 [2024-11-18 15:02:36.166265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.771 [2024-11-18 15:02:36.166302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.771 [2024-11-18 15:02:36.166331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:12.771 [2024-11-18 15:02:36.166340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:12.771 [2024-11-18 15:02:36.166351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-11-18 15:02:36.166390] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:12.772 [2024-11-18 15:02:36.166402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-11-18 15:02:36.166412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:12.772 [2024-11-18 15:02:36.166426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:12.772 [2024-11-18 15:02:36.166434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-11-18 15:02:36.170363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-11-18 15:02:36.170398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:12.772 [2024-11-18 15:02:36.170410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:16:12.772 [2024-11-18 15:02:36.170418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-11-18 15:02:36.170494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.772 [2024-11-18 15:02:36.170504] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:12.772 [2024-11-18 15:02:36.170518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:12.772 [2024-11-18 15:02:36.170526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.772 [2024-11-18 15:02:36.171800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:12.772 [2024-11-18 15:02:36.172845] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.921 ms, result 0 00:16:12.772 [2024-11-18 15:02:36.174799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:12.772 Some configs were skipped because the RPC state that can call them passed over. 00:16:12.772 15:02:36 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:13.033 [2024-11-18 15:02:36.391178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-11-18 15:02:36.391421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:13.033 [2024-11-18 15:02:36.391486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.280 ms 00:16:13.033 [2024-11-18 15:02:36.391512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-11-18 15:02:36.391564] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.676 ms, result 0 00:16:13.033 true 00:16:13.033 15:02:36 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:13.033 [2024-11-18 15:02:36.583143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.033 [2024-11-18 15:02:36.583364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:13.033 [2024-11-18 15:02:36.583428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:16:13.033 [2024-11-18 15:02:36.583451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.033 [2024-11-18 15:02:36.583509] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.112 ms, result 0 00:16:13.033 true 00:16:13.033 15:02:36 -- ftl/trim.sh@102 -- # killprocess 83600 00:16:13.033 15:02:36 -- common/autotest_common.sh@936 -- # '[' -z 83600 ']' 00:16:13.033 15:02:36 -- common/autotest_common.sh@940 -- # kill -0 83600 00:16:13.033 15:02:36 -- common/autotest_common.sh@941 -- # uname 00:16:13.033 15:02:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.033 15:02:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83600 00:16:13.294 15:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.294 15:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.294 killing process with pid 83600 00:16:13.294 15:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83600' 00:16:13.294 15:02:36 -- common/autotest_common.sh@955 -- # kill 83600 00:16:13.294 15:02:36 -- common/autotest_common.sh@960 -- # wait 83600 00:16:13.294 [2024-11-18 15:02:36.750764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.750831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:13.294 
[2024-11-18 15:02:36.750844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:13.294 [2024-11-18 15:02:36.750855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.750878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:13.294 [2024-11-18 15:02:36.751452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.751483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:13.294 [2024-11-18 15:02:36.751494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:13.294 [2024-11-18 15:02:36.751506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.751812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.751831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:13.294 [2024-11-18 15:02:36.751851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:13.294 [2024-11-18 15:02:36.751860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.756209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.756238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:13.294 [2024-11-18 15:02:36.756250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:16:13.294 [2024-11-18 15:02:36.756258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.763550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.763579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:13.294 [2024-11-18 15:02:36.763593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.254 ms 00:16:13.294 [2024-11-18 15:02:36.763607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.765831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.765981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:13.294 [2024-11-18 15:02:36.765999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.153 ms 00:16:13.294 [2024-11-18 15:02:36.766006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.769992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.770087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:13.294 [2024-11-18 15:02:36.770141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.947 ms 00:16:13.294 [2024-11-18 15:02:36.770171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.770388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.770440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:13.294 [2024-11-18 15:02:36.770470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:16:13.294 [2024-11-18 15:02:36.770493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 
15:02:36.773468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.773595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:13.294 [2024-11-18 15:02:36.773656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:16:13.294 [2024-11-18 15:02:36.773679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.775986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.776085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:13.294 [2024-11-18 15:02:36.776134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:16:13.294 [2024-11-18 15:02:36.776155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.777919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.778016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:13.294 [2024-11-18 15:02:36.778063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:16:13.294 [2024-11-18 15:02:36.778085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.779551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.294 [2024-11-18 15:02:36.779642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:13.294 [2024-11-18 15:02:36.779690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:16:13.294 [2024-11-18 15:02:36.779710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.294 [2024-11-18 15:02:36.779783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:13.294 [2024-11-18 15:02:36.779821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.779859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.779932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.779966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.779994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.780953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.781022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.781053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.781082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.781112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:13.294 [2024-11-18 15:02:36.781143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781509] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.781993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 
15:02:36.782470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.782988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:13.295 [2024-11-18 15:02:36.783569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:13.295 [2024-11-18 15:02:36.783699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:13.295 [2024-11-18 15:02:36.783709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:16:13.295 [2024-11-18 15:02:36.783717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:13.295 [2024-11-18 15:02:36.783726] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:13.295 [2024-11-18 15:02:36.783733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:13.295 [2024-11-18 15:02:36.783742] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:13.295 [2024-11-18 15:02:36.783749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:13.295 [2024-11-18 15:02:36.783763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:13.295 [2024-11-18 15:02:36.783770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:13.295 [2024-11-18 15:02:36.783778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:13.295 [2024-11-18 15:02:36.783785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:13.295 [2024-11-18 15:02:36.783794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.295 [2024-11-18 15:02:36.783801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:13.295 [2024-11-18 15:02:36.783813] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:16:13.295 [2024-11-18 15:02:36.783824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.295 [2024-11-18 15:02:36.785706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.295 [2024-11-18 15:02:36.785738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:13.295 [2024-11-18 15:02:36.785750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:16:13.295 [2024-11-18 15:02:36.785762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.295 [2024-11-18 15:02:36.785833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.295 [2024-11-18 15:02:36.785842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:13.295 [2024-11-18 15:02:36.785854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:13.296 [2024-11-18 15:02:36.785863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.792493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.792610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.296 [2024-11-18 15:02:36.792628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.792639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.792704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.792718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.296 [2024-11-18 15:02:36.792730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.792738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.792782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.792792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.296 [2024-11-18 15:02:36.792802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.792810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.792833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.792842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.296 [2024-11-18 15:02:36.792851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.792859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.805364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.805412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.296 [2024-11-18 15:02:36.805428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.805438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:13.296 [2024-11-18 15:02:36.810212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:13.296 [2024-11-18 15:02:36.810286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:13.296 [2024-11-18 15:02:36.810366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:13.296 [2024-11-18 15:02:36.810467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:13.296 [2024-11-18 15:02:36.810539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:13.296 [2024-11-18 15:02:36.810615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:13.296 [2024-11-18 15:02:36.810706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:13.296 [2024-11-18 15:02:36.810717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:13.296 [2024-11-18 15:02:36.810724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.296 [2024-11-18 15:02:36.810873] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.081 ms, result 0 00:16:13.556 15:02:37 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:13.556 [2024-11-18 15:02:37.064819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
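
The trim.sh command recorded just above starts a 256 MB read-back: spdk_dd copies from the ftl0 bdev (--ib=ftl0) into the test data file, and a fresh SPDK application is launched for it, which is why a second full FTL startup sequence follows. A minimal Python sketch of the size arithmetic, assuming --count=65536 is counted in the bdev's 4 KiB blocks (an assumption, but consistent with the layout dump further down, where the 0x5a00-block l2p region is reported as 90.00 MiB):

# Hedged sketch (not part of the test log): why "--count=65536" shows up
# later as "Copying: 256/256 [MB]", assuming --count is in 4 KiB FTL blocks.
BLOCK_SIZE = 4096                    # bytes; 0x5a00 blocks == 90.00 MiB in the layout dump
count = 65536                        # --count argument passed to spdk_dd
print(count * BLOCK_SIZE // 2**20)   # -> 256 (MiB), matching the copy progress below

# Cross-check from the same startup output: 23592960 L2P entries at a
# 4-byte address size also fill exactly the 90 MiB l2p region.
assert 23592960 * 4 == 0x5a00 * BLOCK_SIZE == 90 * 2**20
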
00:16:13.556 [2024-11-18 15:02:37.065099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83636 ] 00:16:13.816 [2024-11-18 15:02:37.210070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.816 [2024-11-18 15:02:37.248786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.816 [2024-11-18 15:02:37.344558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:13.816 [2024-11-18 15:02:37.344629] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:14.076 [2024-11-18 15:02:37.489543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.489767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:14.076 [2024-11-18 15:02:37.489787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:14.076 [2024-11-18 15:02:37.489800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.491662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.491695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.076 [2024-11-18 15:02:37.491703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:16:14.076 [2024-11-18 15:02:37.491709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.491766] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:14.076 [2024-11-18 15:02:37.491958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:14.076 [2024-11-18 15:02:37.491969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.491976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.076 [2024-11-18 15:02:37.491984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:14.076 [2024-11-18 15:02:37.491993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.493309] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:14.076 [2024-11-18 15:02:37.495802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.495833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:14.076 [2024-11-18 15:02:37.495842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:16:14.076 [2024-11-18 15:02:37.495848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.495900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.495908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:14.076 [2024-11-18 15:02:37.495920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:14.076 [2024-11-18 15:02:37.495927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.502018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 
15:02:37.502046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.076 [2024-11-18 15:02:37.502054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:16:14.076 [2024-11-18 15:02:37.502061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.502139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.502147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.076 [2024-11-18 15:02:37.502158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:14.076 [2024-11-18 15:02:37.502164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.502184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.502191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:14.076 [2024-11-18 15:02:37.502197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:14.076 [2024-11-18 15:02:37.502203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.502227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:14.076 [2024-11-18 15:02:37.503790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.503816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.076 [2024-11-18 15:02:37.503824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:16:14.076 [2024-11-18 15:02:37.503830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.503870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.503881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:14.076 [2024-11-18 15:02:37.503888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:14.076 [2024-11-18 15:02:37.503894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.503910] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:14.076 [2024-11-18 15:02:37.503928] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:14.076 [2024-11-18 15:02:37.503957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:14.076 [2024-11-18 15:02:37.503972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:14.076 [2024-11-18 15:02:37.504032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:14.076 [2024-11-18 15:02:37.504043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:14.076 [2024-11-18 15:02:37.504051] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:14.076 [2024-11-18 15:02:37.504059] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:14.076 [2024-11-18 15:02:37.504067] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:14.076 [2024-11-18 15:02:37.504073] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:14.076 [2024-11-18 15:02:37.504079] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:14.076 [2024-11-18 15:02:37.504090] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:14.076 [2024-11-18 15:02:37.504097] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:14.076 [2024-11-18 15:02:37.504103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.504110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:14.076 [2024-11-18 15:02:37.504116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:14.076 [2024-11-18 15:02:37.504125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.504180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.076 [2024-11-18 15:02:37.504188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:14.076 [2024-11-18 15:02:37.504194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:14.076 [2024-11-18 15:02:37.504206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.076 [2024-11-18 15:02:37.504264] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:14.076 [2024-11-18 15:02:37.504272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:14.076 [2024-11-18 15:02:37.504278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.076 [2024-11-18 15:02:37.504285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.076 [2024-11-18 15:02:37.504292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:14.076 [2024-11-18 15:02:37.504297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:14.076 [2024-11-18 15:02:37.504303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:14.076 [2024-11-18 15:02:37.504309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:14.076 [2024-11-18 15:02:37.504345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:14.076 [2024-11-18 15:02:37.504351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:14.076 [2024-11-18 15:02:37.504357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:14.076 [2024-11-18 15:02:37.504369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:14.076 [2024-11-18 15:02:37.504374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:14.076 [2024-11-18 15:02:37.504380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:14.076 [2024-11-18 15:02:37.504385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:14.076 [2024-11-18 15:02:37.504393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.076 [2024-11-18 15:02:37.504398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:14.076 [2024-11-18 15:02:37.504404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:14.076 [2024-11-18 15:02:37.504411] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:14.077 [2024-11-18 15:02:37.504422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:14.077 [2024-11-18 15:02:37.504428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:14.077 [2024-11-18 15:02:37.504440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:14.077 [2024-11-18 15:02:37.504458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:14.077 [2024-11-18 15:02:37.504476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:14.077 [2024-11-18 15:02:37.504497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:14.077 [2024-11-18 15:02:37.504516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.077 [2024-11-18 15:02:37.504527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:14.077 [2024-11-18 15:02:37.504533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:14.077 [2024-11-18 15:02:37.504538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.077 [2024-11-18 15:02:37.504544] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:14.077 [2024-11-18 15:02:37.504552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:14.077 [2024-11-18 15:02:37.504561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.077 [2024-11-18 15:02:37.504577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:14.077 [2024-11-18 15:02:37.504583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:14.077 [2024-11-18 15:02:37.504590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:14.077 [2024-11-18 15:02:37.504598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:14.077 [2024-11-18 15:02:37.504604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:14.077 [2024-11-18 15:02:37.504610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:14.077 [2024-11-18 15:02:37.504618] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:14.077 [2024-11-18 15:02:37.504626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.077 [2024-11-18 15:02:37.504636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:14.077 [2024-11-18 15:02:37.504643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:14.077 [2024-11-18 15:02:37.504649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:14.077 [2024-11-18 15:02:37.504655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:14.077 [2024-11-18 15:02:37.504661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:14.077 [2024-11-18 15:02:37.504668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:14.077 [2024-11-18 15:02:37.504675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:14.077 [2024-11-18 15:02:37.504681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:14.077 [2024-11-18 15:02:37.504687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:14.077 [2024-11-18 15:02:37.504693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:14.077 [2024-11-18 15:02:37.504699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:14.077 [2024-11-18 15:02:37.504707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:14.077 [2024-11-18 15:02:37.504715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:14.077 [2024-11-18 15:02:37.504721] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:14.077 [2024-11-18 15:02:37.504728] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.077 [2024-11-18 15:02:37.504734] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:14.077 [2024-11-18 15:02:37.504741] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:14.077 [2024-11-18 15:02:37.504748] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:14.077 [2024-11-18 15:02:37.504755] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:14.077 [2024-11-18 15:02:37.504761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.504768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:14.077 [2024-11-18 15:02:37.504776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:16:14.077 [2024-11-18 15:02:37.504782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.512188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.512309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.077 [2024-11-18 15:02:37.512374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.366 ms 00:16:14.077 [2024-11-18 15:02:37.512397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.512498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.512594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:14.077 [2024-11-18 15:02:37.512623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:14.077 [2024-11-18 15:02:37.512638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.539247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.539439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.077 [2024-11-18 15:02:37.539506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.574 ms 00:16:14.077 [2024-11-18 15:02:37.539535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.539643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.539672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.077 [2024-11-18 15:02:37.539694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:14.077 [2024-11-18 15:02:37.539714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.540129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.540228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.077 [2024-11-18 15:02:37.540280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:16:14.077 [2024-11-18 15:02:37.540311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.540494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.540591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.077 [2024-11-18 15:02:37.540644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:14.077 [2024-11-18 15:02:37.540667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.547445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.547550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.077 [2024-11-18 15:02:37.547602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:16:14.077 
[2024-11-18 15:02:37.547625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.550723] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:14.077 [2024-11-18 15:02:37.550841] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:14.077 [2024-11-18 15:02:37.550899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.550920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:14.077 [2024-11-18 15:02:37.550982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:16:14.077 [2024-11-18 15:02:37.551027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.562853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.562999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:14.077 [2024-11-18 15:02:37.563070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.499 ms 00:16:14.077 [2024-11-18 15:02:37.563107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.565093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.565144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:14.077 [2024-11-18 15:02:37.565153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:16:14.077 [2024-11-18 15:02:37.565160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.077 [2024-11-18 15:02:37.566596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.077 [2024-11-18 15:02:37.566630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:14.077 [2024-11-18 15:02:37.566638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:16:14.078 [2024-11-18 15:02:37.566644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.566821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.566830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:14.078 [2024-11-18 15:02:37.566843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:14.078 [2024-11-18 15:02:37.566848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.585378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.585548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:14.078 [2024-11-18 15:02:37.585571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.512 ms 00:16:14.078 [2024-11-18 15:02:37.585579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.591547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:14.078 [2024-11-18 15:02:37.606786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.606826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:14.078 [2024-11-18 15:02:37.606838] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.128 ms 00:16:14.078 [2024-11-18 15:02:37.606844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.606931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.606943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:14.078 [2024-11-18 15:02:37.606950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:14.078 [2024-11-18 15:02:37.606956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.607003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.607010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:14.078 [2024-11-18 15:02:37.607016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:14.078 [2024-11-18 15:02:37.607028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.608048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.608079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:14.078 [2024-11-18 15:02:37.608090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:16:14.078 [2024-11-18 15:02:37.608095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.608127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.608135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:14.078 [2024-11-18 15:02:37.608144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:14.078 [2024-11-18 15:02:37.608150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.608180] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:14.078 [2024-11-18 15:02:37.608189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.608195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:14.078 [2024-11-18 15:02:37.608201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:14.078 [2024-11-18 15:02:37.608210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.611894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.612041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:14.078 [2024-11-18 15:02:37.612061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.667 ms 00:16:14.078 [2024-11-18 15:02:37.612068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.612131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.078 [2024-11-18 15:02:37.612139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:14.078 [2024-11-18 15:02:37.612146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:14.078 [2024-11-18 15:02:37.612153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.078 [2024-11-18 15:02:37.613039] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.078 [2024-11-18 15:02:37.613883] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.251 ms, result 0 00:16:14.078 [2024-11-18 15:02:37.614437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.078 [2024-11-18 15:02:37.623695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.459  [2024-11-18T15:02:39.982Z] Copying: 42/256 [MB] (42 MBps) [2024-11-18T15:02:40.917Z] Copying: 84/256 [MB] (42 MBps) [2024-11-18T15:02:41.860Z] Copying: 125/256 [MB] (41 MBps) [2024-11-18T15:02:42.798Z] Copying: 162/256 [MB] (36 MBps) [2024-11-18T15:02:43.740Z] Copying: 205/256 [MB] (42 MBps) [2024-11-18T15:02:44.309Z] Copying: 241/256 [MB] (36 MBps) [2024-11-18T15:02:45.252Z] Copying: 256/256 [MB] (average 40 MBps)[2024-11-18 15:02:45.213395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.662 [2024-11-18 15:02:45.214845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.662 [2024-11-18 15:02:45.214884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:21.662 [2024-11-18 15:02:45.214904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:21.662 [2024-11-18 15:02:45.214913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.662 [2024-11-18 15:02:45.214936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:21.662 [2024-11-18 15:02:45.215654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.662 [2024-11-18 15:02:45.215712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:21.662 [2024-11-18 15:02:45.215736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:16:21.662 [2024-11-18 15:02:45.215863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.662 [2024-11-18 15:02:45.216161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.662 [2024-11-18 15:02:45.216194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:21.662 [2024-11-18 15:02:45.216215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:16:21.662 [2024-11-18 15:02:45.216274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.662 [2024-11-18 15:02:45.219993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.662 [2024-11-18 15:02:45.220088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:21.662 [2024-11-18 15:02:45.220145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:16:21.662 [2024-11-18 15:02:45.220155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.662 [2024-11-18 15:02:45.227855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.662 [2024-11-18 15:02:45.227900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:21.663 [2024-11-18 15:02:45.227911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.659 ms 00:16:21.663 [2024-11-18 15:02:45.227921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 
15:02:45.229606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.229639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:21.663 [2024-11-18 15:02:45.229649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:16:21.663 [2024-11-18 15:02:45.229657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.234336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.234367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:21.663 [2024-11-18 15:02:45.234377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:16:21.663 [2024-11-18 15:02:45.234395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.234522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.234533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:21.663 [2024-11-18 15:02:45.234542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:21.663 [2024-11-18 15:02:45.234549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.236718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.236749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:21.663 [2024-11-18 15:02:45.236758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:16:21.663 [2024-11-18 15:02:45.236764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.238340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.238366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:21.663 [2024-11-18 15:02:45.238374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:16:21.663 [2024-11-18 15:02:45.238381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.239640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.239670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:21.663 [2024-11-18 15:02:45.239678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:16:21.663 [2024-11-18 15:02:45.239685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.240797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.663 [2024-11-18 15:02:45.240828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:21.663 [2024-11-18 15:02:45.240836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:16:21.663 [2024-11-18 15:02:45.240843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.663 [2024-11-18 15:02:45.240862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:21.663 [2024-11-18 15:02:45.240877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:16:21.663 [2024-11-18 15:02:45.240894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.240995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:16:21.663 [2024-11-18 15:02:45.241085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:21.663 [2024-11-18 15:02:45.241216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241473] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:21.664 [2024-11-18 15:02:45.241663] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:21.664 [2024-11-18 15:02:45.241671] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 33d3673d-fe43-4b75-9544-654cede6e0bc 00:16:21.664 [2024-11-18 15:02:45.241679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:21.664 [2024-11-18 15:02:45.241699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:21.664 [2024-11-18 15:02:45.241708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:21.664 [2024-11-18 15:02:45.241716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:21.664 [2024-11-18 15:02:45.241730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:21.664 [2024-11-18 15:02:45.241738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:21.664 [2024-11-18 15:02:45.241745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:21.664 [2024-11-18 15:02:45.241752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:21.664 [2024-11-18 15:02:45.241759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:21.664 [2024-11-18 15:02:45.241765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.664 [2024-11-18 15:02:45.241773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:21.664 [2024-11-18 15:02:45.241784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:16:21.664 [2024-11-18 15:02:45.241791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.664 [2024-11-18 15:02:45.243589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.664 [2024-11-18 15:02:45.243613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:21.664 [2024-11-18 15:02:45.243623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:16:21.664 [2024-11-18 15:02:45.243631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.664 [2024-11-18 15:02:45.243702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.664 [2024-11-18 15:02:45.243711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:21.664 [2024-11-18 15:02:45.243722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:21.664 [2024-11-18 15:02:45.243730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.250155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.250266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.926 [2024-11-18 15:02:45.250327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.250356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.250465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.250528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.926 [2024-11-18 15:02:45.250553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.250572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.250734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.250796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:16:21.926 [2024-11-18 15:02:45.250842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.250866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.250905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.250989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.926 [2024-11-18 15:02:45.251025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.251044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.263564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.263701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.926 [2024-11-18 15:02:45.263753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.263776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.268435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.268546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.926 [2024-11-18 15:02:45.268629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.268652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.268702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.268724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.926 [2024-11-18 15:02:45.268744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.268762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.268804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.268823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.926 [2024-11-18 15:02:45.268842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.268901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.268989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.269018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.926 [2024-11-18 15:02:45.269037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.269055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.269099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.269123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:21.926 [2024-11-18 15:02:45.269142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.269164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.269219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.269241] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.926 [2024-11-18 15:02:45.269261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.269279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.269367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.926 [2024-11-18 15:02:45.269395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.926 [2024-11-18 15:02:45.269415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.926 [2024-11-18 15:02:45.269437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.926 [2024-11-18 15:02:45.269586] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.718 ms, result 0 00:16:21.926 00:16:21.926 00:16:21.926 15:02:45 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:22.498 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:16:22.498 15:02:46 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:22.498 15:02:46 -- ftl/trim.sh@109 -- # fio_kill 00:16:22.498 15:02:46 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:22.498 15:02:46 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:22.498 15:02:46 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:16:22.759 15:02:46 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:22.759 15:02:46 -- ftl/trim.sh@20 -- # killprocess 83600 00:16:22.759 15:02:46 -- common/autotest_common.sh@936 -- # '[' -z 83600 ']' 00:16:22.759 Process with pid 83600 is not found 00:16:22.759 15:02:46 -- common/autotest_common.sh@940 -- # kill -0 83600 00:16:22.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83600) - No such process 00:16:22.759 15:02:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83600 is not found' 00:16:22.759 ************************************ 00:16:22.760 END TEST ftl_trim 00:16:22.760 ************************************ 00:16:22.760 00:16:22.760 real 0m41.723s 00:16:22.760 user 1m2.604s 00:16:22.760 sys 0m5.154s 00:16:22.760 15:02:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:22.760 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 15:02:46 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:16:22.760 15:02:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:22.760 15:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.760 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 ************************************ 00:16:22.760 START TEST ftl_restore 00:16:22.760 ************************************ 00:16:22.760 15:02:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:16:22.760 * Looking for test storage... 
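The teardown captured above follows a verify-then-clean pattern: the data file read back from the FTL bdev is checked against a stored md5, the scratch files are deleted, and the target process is signalled only if it is still alive (here it had already exited, hence the "No such process" line). A minimal bash sketch of that pattern, using the concrete paths and pid from this run as stand-ins; it is a reconstruction from the trace, not the fio_kill implementation in trim.sh:

    #!/usr/bin/env bash
    # Verify-then-clean teardown, sketched from the trace above.
    # testdir and svcpid take this run's values; they are illustrative.
    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    svcpid=83600

    # 1. Check the data read back from the FTL bdev against the saved checksum.
    md5sum -c "$testdir/testfile.md5"

    # 2. Remove the scratch artifacts the test created.
    rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json" \
          "$testdir/random_pattern" "$testdir/data"

    # 3. kill -0 probes for the pid without sending a signal.
    if kill -0 "$svcpid" 2>/dev/null; then
        kill "$svcpid"
    else
        echo "Process with pid $svcpid is not found"
    fi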
00:16:22.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 15:02:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:22.760 15:02:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:22.760 15:02:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:22.760 15:02:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:22.760 15:02:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:22.760 15:02:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:22.760 15:02:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:22.760 15:02:46 -- scripts/common.sh@335 -- # IFS=.-: 00:16:22.760 15:02:46 -- scripts/common.sh@335 -- # read -ra ver1 00:16:22.760 15:02:46 -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.760 15:02:46 -- scripts/common.sh@336 -- # read -ra ver2 00:16:22.760 15:02:46 -- scripts/common.sh@337 -- # local 'op=<' 00:16:22.760 15:02:46 -- scripts/common.sh@339 -- # ver1_l=2 00:16:22.760 15:02:46 -- scripts/common.sh@340 -- # ver2_l=1 00:16:22.760 15:02:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:22.760 15:02:46 -- scripts/common.sh@343 -- # case "$op" in 00:16:22.760 15:02:46 -- scripts/common.sh@344 -- # : 1 00:16:22.760 15:02:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:22.760 15:02:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.760 15:02:46 -- scripts/common.sh@364 -- # decimal 1 00:16:22.760 15:02:46 -- scripts/common.sh@352 -- # local d=1 00:16:22.760 15:02:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.760 15:02:46 -- scripts/common.sh@354 -- # echo 1 00:16:22.760 15:02:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:22.760 15:02:46 -- scripts/common.sh@365 -- # decimal 2 00:16:22.760 15:02:46 -- scripts/common.sh@352 -- # local d=2 00:16:22.760 15:02:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.760 15:02:46 -- scripts/common.sh@354 -- # echo 2 00:16:22.760 15:02:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:22.760 15:02:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:22.760 15:02:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:22.760 15:02:46 -- scripts/common.sh@367 -- # return 0 00:16:22.760 15:02:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.760 15:02:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 15:02:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 15:02:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 15:02:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:22.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.760 --rc genhtml_branch_coverage=1 00:16:22.760 --rc genhtml_function_coverage=1 00:16:22.760 --rc genhtml_legend=1 00:16:22.760 --rc geninfo_all_blocks=1 00:16:22.760 --rc geninfo_unexecuted_blocks=1 00:16:22.760 00:16:22.760 ' 00:16:22.760 15:02:46 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:22.760 15:02:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:16:22.760 15:02:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 15:02:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.760 15:02:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:22.760 15:02:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:22.760 15:02:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.760 15:02:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:22.760 15:02:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:22.760 15:02:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 15:02:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 15:02:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:22.760 15:02:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:22.760 15:02:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.760 15:02:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.760 15:02:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:22.760 15:02:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:22.760 15:02:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 15:02:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.760 15:02:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:22.760 15:02:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:22.760 15:02:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.760 15:02:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.760 15:02:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.760 15:02:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.760 15:02:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:22.760 15:02:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:22.760 15:02:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.760 15:02:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.760 15:02:46 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.760 15:02:46 -- ftl/restore.sh@13 -- # mktemp -d 00:16:22.760 15:02:46 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.h2vITH1F70 00:16:22.760 15:02:46 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:22.760 15:02:46 -- ftl/restore.sh@16 -- # case $opt in 00:16:22.760 15:02:46 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:16:22.760 15:02:46 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
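The xtrace above shows restore.sh consuming its command line: a getopts loop with the option string :u:c:f picks up -c 0000:00:06.0 as the NV-cache BDF, the consumed words are shifted away, and the remaining positional argument becomes the base device. A sketch of that parse, reconstructed from the trace rather than copied from restore.sh (the -u and -f branches exist in the option string but are not exercised in this run):

    #!/usr/bin/env bash
    # Argument parse as suggested by the restore.sh xtrace above.
    nv_cache=""
    while getopts ":u:c:f" opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;  # -c 0000:00:06.0 in this run
            *) ;;                   # -u/-f handling omitted in this sketch
        esac
    done
    shift $((OPTIND - 1))           # the trace shows the equivalent 'shift 2'

    device=$1                       # 0000:00:07.0 in this run
    timeout=240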
00:16:22.760 15:02:46 -- ftl/restore.sh@23 -- # shift 2 00:16:22.760 15:02:46 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:16:22.760 15:02:46 -- ftl/restore.sh@25 -- # timeout=240 00:16:22.760 15:02:46 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:16:22.760 15:02:46 -- ftl/restore.sh@39 -- # svcpid=83794 00:16:22.760 15:02:46 -- ftl/restore.sh@41 -- # waitforlisten 83794 00:16:22.760 15:02:46 -- common/autotest_common.sh@829 -- # '[' -z 83794 ']' 00:16:22.760 15:02:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.760 15:02:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.760 15:02:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.760 15:02:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.760 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 15:02:46 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:23.021 [2024-11-18 15:02:46.404161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:23.021 [2024-11-18 15:02:46.404501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83794 ] 00:16:23.021 [2024-11-18 15:02:46.555155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.021 [2024-11-18 15:02:46.596676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.021 [2024-11-18 15:02:46.596888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.962 15:02:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.962 15:02:47 -- common/autotest_common.sh@862 -- # return 0 00:16:23.962 15:02:47 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:23.962 15:02:47 -- ftl/common.sh@54 -- # local name=nvme0 00:16:23.962 15:02:47 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:23.962 15:02:47 -- ftl/common.sh@56 -- # local size=103424 00:16:23.962 15:02:47 -- ftl/common.sh@59 -- # local base_bdev 00:16:23.962 15:02:47 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:23.962 15:02:47 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:23.962 15:02:47 -- ftl/common.sh@62 -- # local base_size 00:16:23.962 15:02:47 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:23.962 15:02:47 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:23.962 15:02:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:23.962 15:02:47 -- common/autotest_common.sh@1369 -- # local bs 00:16:23.962 15:02:47 -- common/autotest_common.sh@1370 -- # local nb 00:16:23.962 15:02:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.222 15:02:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:24.222 { 00:16:24.222 "name": "nvme0n1", 00:16:24.222 "aliases": [ 00:16:24.222 "a22fce82-5c1a-46e4-8416-a67e5d400bd6" 00:16:24.222 ], 00:16:24.222 "product_name": "NVMe disk", 00:16:24.223 "block_size": 4096, 00:16:24.223 "num_blocks": 1310720, 00:16:24.223 "uuid": 
"a22fce82-5c1a-46e4-8416-a67e5d400bd6", 00:16:24.223 "assigned_rate_limits": { 00:16:24.223 "rw_ios_per_sec": 0, 00:16:24.223 "rw_mbytes_per_sec": 0, 00:16:24.223 "r_mbytes_per_sec": 0, 00:16:24.223 "w_mbytes_per_sec": 0 00:16:24.223 }, 00:16:24.223 "claimed": true, 00:16:24.223 "claim_type": "read_many_write_one", 00:16:24.223 "zoned": false, 00:16:24.223 "supported_io_types": { 00:16:24.223 "read": true, 00:16:24.223 "write": true, 00:16:24.223 "unmap": true, 00:16:24.223 "write_zeroes": true, 00:16:24.223 "flush": true, 00:16:24.223 "reset": true, 00:16:24.223 "compare": true, 00:16:24.223 "compare_and_write": false, 00:16:24.223 "abort": true, 00:16:24.223 "nvme_admin": true, 00:16:24.223 "nvme_io": true 00:16:24.223 }, 00:16:24.223 "driver_specific": { 00:16:24.223 "nvme": [ 00:16:24.223 { 00:16:24.223 "pci_address": "0000:00:07.0", 00:16:24.223 "trid": { 00:16:24.223 "trtype": "PCIe", 00:16:24.223 "traddr": "0000:00:07.0" 00:16:24.223 }, 00:16:24.223 "ctrlr_data": { 00:16:24.223 "cntlid": 0, 00:16:24.223 "vendor_id": "0x1b36", 00:16:24.223 "model_number": "QEMU NVMe Ctrl", 00:16:24.223 "serial_number": "12341", 00:16:24.223 "firmware_revision": "8.0.0", 00:16:24.223 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.223 "oacs": { 00:16:24.223 "security": 0, 00:16:24.223 "format": 1, 00:16:24.223 "firmware": 0, 00:16:24.223 "ns_manage": 1 00:16:24.223 }, 00:16:24.223 "multi_ctrlr": false, 00:16:24.223 "ana_reporting": false 00:16:24.223 }, 00:16:24.223 "vs": { 00:16:24.223 "nvme_version": "1.4" 00:16:24.223 }, 00:16:24.223 "ns_data": { 00:16:24.223 "id": 1, 00:16:24.223 "can_share": false 00:16:24.223 } 00:16:24.223 } 00:16:24.223 ], 00:16:24.223 "mp_policy": "active_passive" 00:16:24.223 } 00:16:24.223 } 00:16:24.223 ]' 00:16:24.223 15:02:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:24.223 15:02:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:24.223 15:02:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:24.223 15:02:47 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:24.223 15:02:47 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:24.223 15:02:47 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:24.223 15:02:47 -- ftl/common.sh@63 -- # base_size=5120 00:16:24.223 15:02:47 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.223 15:02:47 -- ftl/common.sh@67 -- # clear_lvols 00:16:24.223 15:02:47 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.223 15:02:47 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:24.482 15:02:47 -- ftl/common.sh@28 -- # stores=8fa4a156-d367-4ea3-bc0d-73e5659e3a6e 00:16:24.482 15:02:47 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:24.482 15:02:47 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fa4a156-d367-4ea3-bc0d-73e5659e3a6e 00:16:24.740 15:02:48 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:24.740 15:02:48 -- ftl/common.sh@68 -- # lvs=3e21963b-1bb6-4fce-a624-2c6dd78fec4f 00:16:24.740 15:02:48 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e21963b-1bb6-4fce-a624-2c6dd78fec4f 00:16:24.998 15:02:48 -- ftl/restore.sh@43 -- # split_bdev=9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:24.998 15:02:48 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:16:24.998 15:02:48 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:24.998 15:02:48 -- ftl/common.sh@35 -- # local name=nvc0 00:16:24.998 15:02:48 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:24.998 15:02:48 -- ftl/common.sh@37 -- # local base_bdev=9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:24.998 15:02:48 -- ftl/common.sh@38 -- # local cache_size= 00:16:24.998 15:02:48 -- ftl/common.sh@41 -- # get_bdev_size 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:24.998 15:02:48 -- common/autotest_common.sh@1367 -- # local bdev_name=9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:24.998 15:02:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:24.998 15:02:48 -- common/autotest_common.sh@1369 -- # local bs 00:16:24.998 15:02:48 -- common/autotest_common.sh@1370 -- # local nb 00:16:24.998 15:02:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:25.255 15:02:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:25.255 { 00:16:25.255 "name": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:25.255 "aliases": [ 00:16:25.255 "lvs/nvme0n1p0" 00:16:25.255 ], 00:16:25.255 "product_name": "Logical Volume", 00:16:25.255 "block_size": 4096, 00:16:25.255 "num_blocks": 26476544, 00:16:25.255 "uuid": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:25.255 "assigned_rate_limits": { 00:16:25.255 "rw_ios_per_sec": 0, 00:16:25.255 "rw_mbytes_per_sec": 0, 00:16:25.255 "r_mbytes_per_sec": 0, 00:16:25.255 "w_mbytes_per_sec": 0 00:16:25.255 }, 00:16:25.255 "claimed": false, 00:16:25.255 "zoned": false, 00:16:25.255 "supported_io_types": { 00:16:25.255 "read": true, 00:16:25.255 "write": true, 00:16:25.255 "unmap": true, 00:16:25.255 "write_zeroes": true, 00:16:25.255 "flush": false, 00:16:25.255 "reset": true, 00:16:25.255 "compare": false, 00:16:25.255 "compare_and_write": false, 00:16:25.255 "abort": false, 00:16:25.255 "nvme_admin": false, 00:16:25.255 "nvme_io": false 00:16:25.255 }, 00:16:25.255 "driver_specific": { 00:16:25.255 "lvol": { 00:16:25.255 "lvol_store_uuid": "3e21963b-1bb6-4fce-a624-2c6dd78fec4f", 00:16:25.255 "base_bdev": "nvme0n1", 00:16:25.255 "thin_provision": true, 00:16:25.255 "snapshot": false, 00:16:25.255 "clone": false, 00:16:25.255 "esnap_clone": false 00:16:25.255 } 00:16:25.255 } 00:16:25.255 } 00:16:25.255 ]' 00:16:25.255 15:02:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:25.255 15:02:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:25.255 15:02:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:25.255 15:02:48 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:25.255 15:02:48 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:25.255 15:02:48 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:25.255 15:02:48 -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.255 15:02:48 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.255 15:02:48 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:25.513 15:02:49 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:25.513 15:02:49 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:25.513 15:02:49 -- ftl/common.sh@48 -- # get_bdev_size 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:25.513 15:02:49 -- common/autotest_common.sh@1367 -- # local bdev_name=9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:25.513 15:02:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:25.513 15:02:49 -- common/autotest_common.sh@1369 -- # local 
bs 00:16:25.513 15:02:49 -- common/autotest_common.sh@1370 -- # local nb 00:16:25.513 15:02:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:25.770 15:02:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:25.770 { 00:16:25.770 "name": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:25.770 "aliases": [ 00:16:25.770 "lvs/nvme0n1p0" 00:16:25.770 ], 00:16:25.770 "product_name": "Logical Volume", 00:16:25.770 "block_size": 4096, 00:16:25.770 "num_blocks": 26476544, 00:16:25.770 "uuid": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:25.770 "assigned_rate_limits": { 00:16:25.770 "rw_ios_per_sec": 0, 00:16:25.770 "rw_mbytes_per_sec": 0, 00:16:25.770 "r_mbytes_per_sec": 0, 00:16:25.770 "w_mbytes_per_sec": 0 00:16:25.770 }, 00:16:25.770 "claimed": false, 00:16:25.770 "zoned": false, 00:16:25.770 "supported_io_types": { 00:16:25.770 "read": true, 00:16:25.770 "write": true, 00:16:25.770 "unmap": true, 00:16:25.770 "write_zeroes": true, 00:16:25.770 "flush": false, 00:16:25.770 "reset": true, 00:16:25.770 "compare": false, 00:16:25.770 "compare_and_write": false, 00:16:25.770 "abort": false, 00:16:25.770 "nvme_admin": false, 00:16:25.770 "nvme_io": false 00:16:25.770 }, 00:16:25.770 "driver_specific": { 00:16:25.770 "lvol": { 00:16:25.770 "lvol_store_uuid": "3e21963b-1bb6-4fce-a624-2c6dd78fec4f", 00:16:25.770 "base_bdev": "nvme0n1", 00:16:25.770 "thin_provision": true, 00:16:25.770 "snapshot": false, 00:16:25.770 "clone": false, 00:16:25.770 "esnap_clone": false 00:16:25.770 } 00:16:25.770 } 00:16:25.770 } 00:16:25.770 ]' 00:16:25.770 15:02:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:25.770 15:02:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:25.770 15:02:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:25.770 15:02:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:25.770 15:02:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:25.770 15:02:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:25.770 15:02:49 -- ftl/common.sh@48 -- # cache_size=5171 00:16:25.770 15:02:49 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.028 15:02:49 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:26.028 15:02:49 -- ftl/restore.sh@48 -- # get_bdev_size 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:26.028 15:02:49 -- common/autotest_common.sh@1367 -- # local bdev_name=9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:26.028 15:02:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:26.028 15:02:49 -- common/autotest_common.sh@1369 -- # local bs 00:16:26.028 15:02:49 -- common/autotest_common.sh@1370 -- # local nb 00:16:26.028 15:02:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 00:16:26.286 15:02:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:26.286 { 00:16:26.286 "name": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:26.286 "aliases": [ 00:16:26.286 "lvs/nvme0n1p0" 00:16:26.286 ], 00:16:26.286 "product_name": "Logical Volume", 00:16:26.286 "block_size": 4096, 00:16:26.286 "num_blocks": 26476544, 00:16:26.286 "uuid": "9fa3cfec-24f5-4ab2-9d61-04942bab6f00", 00:16:26.286 "assigned_rate_limits": { 00:16:26.286 "rw_ios_per_sec": 0, 00:16:26.286 "rw_mbytes_per_sec": 0, 00:16:26.286 "r_mbytes_per_sec": 0, 00:16:26.286 "w_mbytes_per_sec": 0 00:16:26.286 }, 00:16:26.286 
"claimed": false, 00:16:26.286 "zoned": false, 00:16:26.286 "supported_io_types": { 00:16:26.286 "read": true, 00:16:26.286 "write": true, 00:16:26.286 "unmap": true, 00:16:26.286 "write_zeroes": true, 00:16:26.286 "flush": false, 00:16:26.286 "reset": true, 00:16:26.286 "compare": false, 00:16:26.286 "compare_and_write": false, 00:16:26.286 "abort": false, 00:16:26.286 "nvme_admin": false, 00:16:26.286 "nvme_io": false 00:16:26.286 }, 00:16:26.286 "driver_specific": { 00:16:26.286 "lvol": { 00:16:26.286 "lvol_store_uuid": "3e21963b-1bb6-4fce-a624-2c6dd78fec4f", 00:16:26.286 "base_bdev": "nvme0n1", 00:16:26.286 "thin_provision": true, 00:16:26.286 "snapshot": false, 00:16:26.286 "clone": false, 00:16:26.286 "esnap_clone": false 00:16:26.286 } 00:16:26.286 } 00:16:26.286 } 00:16:26.286 ]' 00:16:26.286 15:02:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:26.286 15:02:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:26.286 15:02:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:26.286 15:02:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:26.286 15:02:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:26.286 15:02:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:26.286 15:02:49 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:26.286 15:02:49 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 --l2p_dram_limit 10' 00:16:26.286 15:02:49 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:26.286 15:02:49 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:16:26.286 15:02:49 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:26.286 15:02:49 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:26.286 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:26.286 15:02:49 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 --l2p_dram_limit 10 -c nvc0n1p0 00:16:26.546 [2024-11-18 15:02:49.881087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.881147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:26.546 [2024-11-18 15:02:49.881163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:26.546 [2024-11-18 15:02:49.881175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.881232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.881242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.546 [2024-11-18 15:02:49.881255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:26.546 [2024-11-18 15:02:49.881263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.881284] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:26.546 [2024-11-18 15:02:49.881618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:26.546 [2024-11-18 15:02:49.881640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.881649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.546 [2024-11-18 15:02:49.881660] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:16:26.546 [2024-11-18 15:02:49.881668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.881700] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:16:26.546 [2024-11-18 15:02:49.883045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.883083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:26.546 [2024-11-18 15:02:49.883093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:26.546 [2024-11-18 15:02:49.883103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.889938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.889970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.546 [2024-11-18 15:02:49.889978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.765 ms 00:16:26.546 [2024-11-18 15:02:49.889988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.890064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.890075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.546 [2024-11-18 15:02:49.890081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:26.546 [2024-11-18 15:02:49.890089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.890133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.890144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:26.546 [2024-11-18 15:02:49.890153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:26.546 [2024-11-18 15:02:49.890162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.890182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:26.546 [2024-11-18 15:02:49.891807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.891831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.546 [2024-11-18 15:02:49.891841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:16:26.546 [2024-11-18 15:02:49.891847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.891879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.546 [2024-11-18 15:02:49.891886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:26.546 [2024-11-18 15:02:49.891899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:26.546 [2024-11-18 15:02:49.891905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.546 [2024-11-18 15:02:49.891921] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:26.546 [2024-11-18 15:02:49.892014] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:26.546 [2024-11-18 15:02:49.892026] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:26.546 [2024-11-18 15:02:49.892037] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:26.546 [2024-11-18 15:02:49.892055] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:26.546 [2024-11-18 15:02:49.892062] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:26.546 [2024-11-18 15:02:49.892070] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:26.546 [2024-11-18 15:02:49.892075] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:26.546 [2024-11-18 15:02:49.892083] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:26.546 [2024-11-18 15:02:49.892090] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:26.547 [2024-11-18 15:02:49.892101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.547 [2024-11-18 15:02:49.892106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:26.547 [2024-11-18 15:02:49.892114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:16:26.547 [2024-11-18 15:02:49.892120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.547 [2024-11-18 15:02:49.892179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.547 [2024-11-18 15:02:49.892186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:26.547 [2024-11-18 15:02:49.892194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:26.547 [2024-11-18 15:02:49.892200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.547 [2024-11-18 15:02:49.892260] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:26.547 [2024-11-18 15:02:49.892268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:26.547 [2024-11-18 15:02:49.892276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:26.547 [2024-11-18 15:02:49.892299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:26.547 [2024-11-18 15:02:49.892342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.547 [2024-11-18 15:02:49.892355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:26.547 [2024-11-18 15:02:49.892362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:26.547 [2024-11-18 15:02:49.892371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.547 [2024-11-18 15:02:49.892377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:26.547 [2024-11-18 15:02:49.892383] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:16:26.547 [2024-11-18 15:02:49.892389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:26.547 [2024-11-18 15:02:49.892402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:26.547 [2024-11-18 15:02:49.892408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:26.547 [2024-11-18 15:02:49.892421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:26.547 [2024-11-18 15:02:49.892428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:26.547 [2024-11-18 15:02:49.892442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:26.547 [2024-11-18 15:02:49.892462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:26.547 [2024-11-18 15:02:49.892484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:26.547 [2024-11-18 15:02:49.892505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:26.547 [2024-11-18 15:02:49.892523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.547 [2024-11-18 15:02:49.892545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:26.547 [2024-11-18 15:02:49.892552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:26.547 [2024-11-18 15:02:49.892558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.547 [2024-11-18 15:02:49.892564] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:26.547 [2024-11-18 15:02:49.892573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:26.547 [2024-11-18 15:02:49.892581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.547 [2024-11-18 15:02:49.892597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:26.547 [2024-11-18 15:02:49.892604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:26.547 [2024-11-18 15:02:49.892611] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:26.547 [2024-11-18 15:02:49.892617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:26.547 [2024-11-18 15:02:49.892624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:26.547 [2024-11-18 15:02:49.892629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:26.547 [2024-11-18 15:02:49.892639] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:26.547 [2024-11-18 15:02:49.892648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.547 [2024-11-18 15:02:49.892656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:26.547 [2024-11-18 15:02:49.892663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:26.547 [2024-11-18 15:02:49.892673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:26.547 [2024-11-18 15:02:49.892679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:26.547 [2024-11-18 15:02:49.892687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:26.547 [2024-11-18 15:02:49.892693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:26.547 [2024-11-18 15:02:49.892701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:26.547 [2024-11-18 15:02:49.892707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:26.547 [2024-11-18 15:02:49.892717] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:26.547 [2024-11-18 15:02:49.892723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:26.547 [2024-11-18 15:02:49.892732] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:26.547 [2024-11-18 15:02:49.892738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:26.547 [2024-11-18 15:02:49.892746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:26.547 [2024-11-18 15:02:49.892752] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:26.547 [2024-11-18 15:02:49.892761] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.547 [2024-11-18 15:02:49.892768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:26.547 [2024-11-18 15:02:49.892776] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:26.547 [2024-11-18 15:02:49.892782] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:26.547 [2024-11-18 15:02:49.892789] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:26.547 [2024-11-18 15:02:49.892796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.547 [2024-11-18 15:02:49.892805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:26.547 [2024-11-18 15:02:49.892811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:16:26.547 [2024-11-18 15:02:49.892819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.547 [2024-11-18 15:02:49.899833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.547 [2024-11-18 15:02:49.899865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:26.547 [2024-11-18 15:02:49.899874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.981 ms 00:16:26.547 [2024-11-18 15:02:49.899882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.547 [2024-11-18 15:02:49.899952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.547 [2024-11-18 15:02:49.899962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:26.547 [2024-11-18 15:02:49.899969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:26.548 [2024-11-18 15:02:49.899977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.909981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.910011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:26.548 [2024-11-18 15:02:49.910022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.973 ms 00:16:26.548 [2024-11-18 15:02:49.910029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.910063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.910071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:26.548 [2024-11-18 15:02:49.910081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:26.548 [2024-11-18 15:02:49.910088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.910497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.910516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:26.548 [2024-11-18 15:02:49.910529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:16:26.548 [2024-11-18 15:02:49.910537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.910633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.910644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:26.548 [2024-11-18 15:02:49.910651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 
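The FTL startup trace running above is all emitted by a single bdev_ftl_create RPC; the preparatory calls can be read off the xtrace earlier in this test. Collected here as a replayable sketch, with the UUIDs and PCIe addresses being values from this particular run rather than constants:

    #!/usr/bin/env bash
    # RPC sequence reassembled from the xtrace of this run; the UUIDs
    # and BDFs below are this run's values, not constants.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0      # base dev -> nvme0n1
    $rpc bdev_lvol_delete_lvstore -u 8fa4a156-d367-4ea3-bc0d-73e5659e3a6e  # drop stale lvstore
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs               # returned 3e21963b-... here
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 3e21963b-1bb6-4fce-a624-2c6dd78fec4f
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0       # NV-cache dev -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                 # carve nvc0n1p0 for the cache
    $rpc -t 240 bdev_ftl_create -b ftl0 \
         -d 9fa3cfec-24f5-4ab2-9d61-04942bab6f00 --l2p_dram_limit 10 -c nvc0n1p0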
00:16:26.548 [2024-11-18 15:02:49.910659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.916971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.917153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:26.548 [2024-11-18 15:02:49.917167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:16:26.548 [2024-11-18 15:02:49.917174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.924560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:26.548 [2024-11-18 15:02:49.927403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.927510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:26.548 [2024-11-18 15:02:49.927525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.179 ms 00:16:26.548 [2024-11-18 15:02:49.927531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.986189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.548 [2024-11-18 15:02:49.986238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:26.548 [2024-11-18 15:02:49.986255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.629 ms 00:16:26.548 [2024-11-18 15:02:49.986268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.548 [2024-11-18 15:02:49.986308] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:26.548 [2024-11-18 15:02:49.986337] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:29.083 [2024-11-18 15:02:52.507779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.507857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:29.083 [2024-11-18 15:02:52.507883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2521.447 ms 00:16:29.083 [2024-11-18 15:02:52.507892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.508097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.508110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:29.083 [2024-11-18 15:02:52.508121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:16:29.083 [2024-11-18 15:02:52.508129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.511396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.511434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:29.083 [2024-11-18 15:02:52.511450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:16:29.083 [2024-11-18 15:02:52.511459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.514241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.514274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:29.083 [2024-11-18 15:02:52.514286] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:16:29.083 [2024-11-18 15:02:52.514294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.514487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.514498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:29.083 [2024-11-18 15:02:52.514508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:29.083 [2024-11-18 15:02:52.514515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.537239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.537279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:29.083 [2024-11-18 15:02:52.537292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.701 ms 00:16:29.083 [2024-11-18 15:02:52.537301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.541738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.541772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:29.083 [2024-11-18 15:02:52.541788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:16:29.083 [2024-11-18 15:02:52.541797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.543169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.543341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:29.083 [2024-11-18 15:02:52.543361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:16:29.083 [2024-11-18 15:02:52.543371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.546985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.547117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:29.083 [2024-11-18 15:02:52.547136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:16:29.083 [2024-11-18 15:02:52.547145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.547178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.547190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:29.083 [2024-11-18 15:02:52.547201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:29.083 [2024-11-18 15:02:52.547209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.547280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.083 [2024-11-18 15:02:52.547290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:29.083 [2024-11-18 15:02:52.547301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:29.083 [2024-11-18 15:02:52.547309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.083 [2024-11-18 15:02:52.548240] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2666.741 ms, result 0 00:16:29.083 { 00:16:29.083 "name": 
"ftl0", 00:16:29.083 "uuid": "2ffa0f95-3a55-45bb-8c34-aba5a0743530" 00:16:29.083 } 00:16:29.083 15:02:52 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:16:29.084 15:02:52 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:29.341 15:02:52 -- ftl/restore.sh@63 -- # echo ']}' 00:16:29.341 15:02:52 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:29.341 [2024-11-18 15:02:52.926554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.341 [2024-11-18 15:02:52.926593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:29.341 [2024-11-18 15:02:52.926605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:29.341 [2024-11-18 15:02:52.926615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.341 [2024-11-18 15:02:52.926650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:29.341 [2024-11-18 15:02:52.927199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.341 [2024-11-18 15:02:52.927216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:29.341 [2024-11-18 15:02:52.927228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:16:29.341 [2024-11-18 15:02:52.927236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.341 [2024-11-18 15:02:52.927514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.341 [2024-11-18 15:02:52.927527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:29.341 [2024-11-18 15:02:52.927539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:16:29.341 [2024-11-18 15:02:52.927547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.930809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.930831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:29.616 [2024-11-18 15:02:52.930845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:16:29.616 [2024-11-18 15:02:52.930853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.936983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.937010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:29.616 [2024-11-18 15:02:52.937023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:16:29.616 [2024-11-18 15:02:52.937030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.938591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.938621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:29.616 [2024-11-18 15:02:52.938639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:16:29.616 [2024-11-18 15:02:52.938647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.943153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.943187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:29.616 
[2024-11-18 15:02:52.943198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.471 ms 00:16:29.616 [2024-11-18 15:02:52.943206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.943344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.943355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:29.616 [2024-11-18 15:02:52.943366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:29.616 [2024-11-18 15:02:52.943373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.945114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.945142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:29.616 [2024-11-18 15:02:52.945152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:16:29.616 [2024-11-18 15:02:52.945159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.946605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.946641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:29.616 [2024-11-18 15:02:52.946652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:16:29.616 [2024-11-18 15:02:52.946659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.616 [2024-11-18 15:02:52.947850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.616 [2024-11-18 15:02:52.947974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:29.617 [2024-11-18 15:02:52.947992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:16:29.617 [2024-11-18 15:02:52.948000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.617 [2024-11-18 15:02:52.949154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.617 [2024-11-18 15:02:52.949178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:29.617 [2024-11-18 15:02:52.949188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:16:29.617 [2024-11-18 15:02:52.949196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.617 [2024-11-18 15:02:52.949226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:29.617 [2024-11-18 15:02:52.949240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 15:02:52.949964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:29.617 [2024-11-18 
15:02:52.949973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.949980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.949990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.949997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:29.618 [2024-11-18 15:02:52.950144] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:29.618 [2024-11-18 15:02:52.950155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:16:29.618 [2024-11-18 15:02:52.950163] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:29.618 [2024-11-18 15:02:52.950172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:29.618 [2024-11-18 15:02:52.950180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:29.618 [2024-11-18 15:02:52.950189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:29.618 [2024-11-18 15:02:52.950196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:29.618 [2024-11-18 
15:02:52.950207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:29.618 [2024-11-18 15:02:52.950215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:29.618 [2024-11-18 15:02:52.950224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:29.618 [2024-11-18 15:02:52.950230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:29.618 [2024-11-18 15:02:52.950240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.618 [2024-11-18 15:02:52.950248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:29.618 [2024-11-18 15:02:52.950257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:16:29.618 [2024-11-18 15:02:52.950267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.952334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.618 [2024-11-18 15:02:52.952421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:29.618 [2024-11-18 15:02:52.952475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:16:29.618 [2024-11-18 15:02:52.952498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.952575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.618 [2024-11-18 15:02:52.952650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:29.618 [2024-11-18 15:02:52.952679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:29.618 [2024-11-18 15:02:52.952700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.959085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.959190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:29.618 [2024-11-18 15:02:52.959245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.959268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.959386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.959412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.618 [2024-11-18 15:02:52.959467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.959490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.959573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.959619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.618 [2024-11-18 15:02:52.959642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.959702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.959739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.959798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.618 [2024-11-18 15:02:52.959824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.959868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.971576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.971722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.618 [2024-11-18 15:02:52.971776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.971799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.976364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.976472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.618 [2024-11-18 15:02:52.976529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.976551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.976660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.976688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.618 [2024-11-18 15:02:52.976778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.976800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.976856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.976879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.618 [2024-11-18 15:02:52.976932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.976952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.977072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.977477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.618 [2024-11-18 15:02:52.977771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.977954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.978293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.978522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:29.618 [2024-11-18 15:02:52.978768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.978930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.979183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.979400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.618 [2024-11-18 15:02:52.979564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 [2024-11-18 15:02:52.979724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.618 [2024-11-18 15:02:52.979974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.618 [2024-11-18 15:02:52.980153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.618 [2024-11-18 15:02:52.980356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.618 
[2024-11-18 15:02:52.980506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:29.618 [2024-11-18 15:02:52.981055] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.352 ms, result 0
00:16:29.618 true
00:16:29.618 15:02:52 -- ftl/restore.sh@66 -- # killprocess 83794
00:16:29.618 15:02:52 -- common/autotest_common.sh@936 -- # '[' -z 83794 ']'
00:16:29.618 15:02:52 -- common/autotest_common.sh@940 -- # kill -0 83794
00:16:29.618 15:02:52 -- common/autotest_common.sh@941 -- # uname
00:16:29.618 15:02:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:29.618 15:02:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83794
00:16:29.618 killing process with pid 83794
00:16:29.618 15:02:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:29.618 15:02:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:29.618 15:02:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83794'
00:16:29.618 15:02:53 -- common/autotest_common.sh@955 -- # kill 83794
00:16:29.618 15:02:53 -- common/autotest_common.sh@960 -- # wait 83794
00:16:34.969 15:02:57 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:16:38.252 262144+0 records in
00:16:38.252 262144+0 records out
00:16:38.252 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.59896 s, 298 MB/s
00:16:38.252 15:03:01 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:16:39.634 15:03:02 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-18 15:03:02.946706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
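[editor's note] The xtrace lines above capture the data phase of the restore test: the bdev subsystem configuration is saved as JSON (lines @61-63, so a later spdk_dd run can recreate the same ftl0 bdev on its own), the FTL bdev is unloaded (@65) and the target process killed (@66), then 262144 blocks of 4 KiB (2^18 x 2^12 = 1073741824 bytes, exactly 1 GiB, hence the ~298 MB/s over 3.6 s) of random data are generated, checksummed, and written into ftl0 via spdk_dd (@69-73). A minimal standalone sketch of those steps follows; the commands and paths are taken from the log, while the SPDK/TESTFILE/FTL_JSON variable names and the brace-group redirect into the config file are my own reconstruction of what the echo/rpc.py/echo trio implies:

  SPDK=/home/vagrant/spdk_repo/spdk          # repo path as it appears in the log
  TESTFILE=$SPDK/test/ftl/testfile           # scratch file name from the log
  FTL_JSON=$SPDK/test/ftl/config/ftl.json    # saved bdev config used by spdk_dd

  # Save the bdev subsystem config so spdk_dd can bring up ftl0 out of process
  # (the wrapping echoes produce a complete {"subsystems": [...]} document).
  {
    echo '{"subsystems": ['
    "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev
    echo ']}'
  } > "$FTL_JSON"

  # Detach the FTL bdev from the running target; the test then stops the
  # target itself (killprocess 83794 in the log, an autotest_common.sh helper).
  "$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0

  # 256K blocks of 4 KiB = 1 GiB of random payload, checksummed for the
  # post-restore comparison.
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
  md5sum "$TESTFILE"

  # Replay the payload into ftl0 via spdk_dd using the saved config; this is
  # what triggers the fresh 'FTL startup' trace that follows below.
  "$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"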
00:16:39.634 [2024-11-18 15:03:02.946810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83996 ] 00:16:39.634 [2024-11-18 15:03:03.087996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.634 [2024-11-18 15:03:03.127097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.896 [2024-11-18 15:03:03.223332] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:39.896 [2024-11-18 15:03:03.223409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:39.896 [2024-11-18 15:03:03.368261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.368312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:39.896 [2024-11-18 15:03:03.368339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:39.896 [2024-11-18 15:03:03.368346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.368391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.368398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.896 [2024-11-18 15:03:03.368405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:39.896 [2024-11-18 15:03:03.368413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.368431] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:39.896 [2024-11-18 15:03:03.368646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:39.896 [2024-11-18 15:03:03.368657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.368670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.896 [2024-11-18 15:03:03.368676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:16:39.896 [2024-11-18 15:03:03.368682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.369902] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:39.896 [2024-11-18 15:03:03.372442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.372471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:39.896 [2024-11-18 15:03:03.372484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:16:39.896 [2024-11-18 15:03:03.372491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.372586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.372595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:39.896 [2024-11-18 15:03:03.372602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:39.896 [2024-11-18 15:03:03.372608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.378670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 
15:03:03.378700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.896 [2024-11-18 15:03:03.378708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.013 ms 00:16:39.896 [2024-11-18 15:03:03.378714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.378776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.378784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.896 [2024-11-18 15:03:03.378794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:39.896 [2024-11-18 15:03:03.378801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.378833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.378844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:39.896 [2024-11-18 15:03:03.378852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:39.896 [2024-11-18 15:03:03.378860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.378878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.896 [2024-11-18 15:03:03.380417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.380443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.896 [2024-11-18 15:03:03.380454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:16:39.896 [2024-11-18 15:03:03.380460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.380495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.380502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:39.896 [2024-11-18 15:03:03.380511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:39.896 [2024-11-18 15:03:03.380517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.380532] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:39.896 [2024-11-18 15:03:03.380548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:39.896 [2024-11-18 15:03:03.380579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:39.896 [2024-11-18 15:03:03.380591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:39.896 [2024-11-18 15:03:03.380651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:39.896 [2024-11-18 15:03:03.380662] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:39.896 [2024-11-18 15:03:03.380674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:39.896 [2024-11-18 15:03:03.380682] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:39.896 [2024-11-18 15:03:03.380688] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:39.896 [2024-11-18 15:03:03.380696] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:39.896 [2024-11-18 15:03:03.380701] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:39.896 [2024-11-18 15:03:03.380707] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:39.896 [2024-11-18 15:03:03.380714] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:39.896 [2024-11-18 15:03:03.380719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.380725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:39.896 [2024-11-18 15:03:03.380731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:39.896 [2024-11-18 15:03:03.380739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.380785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.896 [2024-11-18 15:03:03.380794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:39.896 [2024-11-18 15:03:03.380800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:39.896 [2024-11-18 15:03:03.380805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.896 [2024-11-18 15:03:03.380858] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:39.896 [2024-11-18 15:03:03.380866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:39.896 [2024-11-18 15:03:03.380874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.896 [2024-11-18 15:03:03.380886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.896 [2024-11-18 15:03:03.380894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:39.896 [2024-11-18 15:03:03.380899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:39.896 [2024-11-18 15:03:03.380904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:39.896 [2024-11-18 15:03:03.380911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:39.896 [2024-11-18 15:03:03.380917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:39.896 [2024-11-18 15:03:03.380923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.896 [2024-11-18 15:03:03.380928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:39.896 [2024-11-18 15:03:03.380933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:39.896 [2024-11-18 15:03:03.380938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.896 [2024-11-18 15:03:03.380947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:39.896 [2024-11-18 15:03:03.380953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:39.896 [2024-11-18 15:03:03.380958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.896 [2024-11-18 15:03:03.380963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:39.896 [2024-11-18 15:03:03.380969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:39.897 [2024-11-18 15:03:03.380974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:39.897 [2024-11-18 15:03:03.380981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:39.897 [2024-11-18 15:03:03.380987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:39.897 [2024-11-18 15:03:03.380992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:39.897 [2024-11-18 15:03:03.380998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:39.897 [2024-11-18 15:03:03.381003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:39.897 [2024-11-18 15:03:03.381017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:39.897 [2024-11-18 15:03:03.381033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:39.897 [2024-11-18 15:03:03.381047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:39.897 [2024-11-18 15:03:03.381065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.897 [2024-11-18 15:03:03.381077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:39.897 [2024-11-18 15:03:03.381082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:39.897 [2024-11-18 15:03:03.381088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.897 [2024-11-18 15:03:03.381093] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:39.897 [2024-11-18 15:03:03.381100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:39.897 [2024-11-18 15:03:03.381107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.897 [2024-11-18 15:03:03.381121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:39.897 [2024-11-18 15:03:03.381126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:39.897 [2024-11-18 15:03:03.381132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:39.897 [2024-11-18 15:03:03.381138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:39.897 [2024-11-18 15:03:03.381144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:39.897 [2024-11-18 15:03:03.381151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:39.897 [2024-11-18 15:03:03.381158] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:39.897 [2024-11-18 15:03:03.381167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.897 [2024-11-18 15:03:03.381175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:39.897 [2024-11-18 15:03:03.381182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:39.897 [2024-11-18 15:03:03.381188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:39.897 [2024-11-18 15:03:03.381195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:39.897 [2024-11-18 15:03:03.381201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:39.897 [2024-11-18 15:03:03.381207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:39.897 [2024-11-18 15:03:03.381214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:39.897 [2024-11-18 15:03:03.381220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:39.897 [2024-11-18 15:03:03.381226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:39.897 [2024-11-18 15:03:03.381232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:39.897 [2024-11-18 15:03:03.381238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:39.897 [2024-11-18 15:03:03.381244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:39.897 [2024-11-18 15:03:03.381252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:39.897 [2024-11-18 15:03:03.381258] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:39.897 [2024-11-18 15:03:03.381265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.897 [2024-11-18 15:03:03.381273] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:39.897 [2024-11-18 15:03:03.381280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:39.897 [2024-11-18 15:03:03.381286] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:39.897 [2024-11-18 15:03:03.381293] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:39.897 [2024-11-18 15:03:03.381300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.381307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:39.897 [2024-11-18 15:03:03.381313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:16:39.897 [2024-11-18 15:03:03.381332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.388524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.388551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.897 [2024-11-18 15:03:03.388560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.156 ms 00:16:39.897 [2024-11-18 15:03:03.388567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.388641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.388653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.897 [2024-11-18 15:03:03.388661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:39.897 [2024-11-18 15:03:03.388668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.406193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.406248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.897 [2024-11-18 15:03:03.406264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.485 ms 00:16:39.897 [2024-11-18 15:03:03.406275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.406351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.406366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.897 [2024-11-18 15:03:03.406382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:39.897 [2024-11-18 15:03:03.406396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.406925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.897 [2024-11-18 15:03:03.406946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.897 [2024-11-18 15:03:03.406960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:16:39.897 [2024-11-18 15:03:03.406972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.897 [2024-11-18 15:03:03.407138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.407158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.898 [2024-11-18 15:03:03.407169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:39.898 [2024-11-18 15:03:03.407178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.414771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.414808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.898 [2024-11-18 15:03:03.414821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.562 ms 00:16:39.898 [2024-11-18 
15:03:03.414840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.417432] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:39.898 [2024-11-18 15:03:03.417573] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:39.898 [2024-11-18 15:03:03.417594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.417600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:39.898 [2024-11-18 15:03:03.417608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:16:39.898 [2024-11-18 15:03:03.417614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.429612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.429647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:39.898 [2024-11-18 15:03:03.429660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.919 ms 00:16:39.898 [2024-11-18 15:03:03.429666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.431153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.431180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:39.898 [2024-11-18 15:03:03.431187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:16:39.898 [2024-11-18 15:03:03.431193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.432802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.432903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:39.898 [2024-11-18 15:03:03.432914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:16:39.898 [2024-11-18 15:03:03.432925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.433083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.433093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.898 [2024-11-18 15:03:03.433100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:39.898 [2024-11-18 15:03:03.433106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.451604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.451730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:39.898 [2024-11-18 15:03:03.451745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:16:39.898 [2024-11-18 15:03:03.451756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.457682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:39.898 [2024-11-18 15:03:03.459945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.459974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.898 [2024-11-18 15:03:03.459983] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.159 ms 00:16:39.898 [2024-11-18 15:03:03.459990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.460042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.460050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:39.898 [2024-11-18 15:03:03.460057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:39.898 [2024-11-18 15:03:03.460063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.460109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.460123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.898 [2024-11-18 15:03:03.460135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:39.898 [2024-11-18 15:03:03.460141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.461237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.461265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:39.898 [2024-11-18 15:03:03.461272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:16:39.898 [2024-11-18 15:03:03.461278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.461303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.461310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:39.898 [2024-11-18 15:03:03.461329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:39.898 [2024-11-18 15:03:03.461338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.461370] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:39.898 [2024-11-18 15:03:03.461377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.461384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:39.898 [2024-11-18 15:03:03.461393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:39.898 [2024-11-18 15:03:03.461399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.464844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.464873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:39.898 [2024-11-18 15:03:03.464881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:16:39.898 [2024-11-18 15:03:03.464889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.464948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.898 [2024-11-18 15:03:03.464958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:39.898 [2024-11-18 15:03:03.464964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:39.898 [2024-11-18 15:03:03.464974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.898 [2024-11-18 15:03:03.465882] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 97.242 ms, result 0 00:16:41.272  [2024-11-18T15:03:05.804Z] Copying: 51/1024 [MB] (51 MBps) [2024-11-18T15:03:06.738Z] Copying: 99/1024 [MB] (48 MBps) [2024-11-18T15:03:07.670Z] Copying: 144/1024 [MB] (45 MBps) [2024-11-18T15:03:08.605Z] Copying: 190/1024 [MB] (45 MBps) [2024-11-18T15:03:09.540Z] Copying: 230/1024 [MB] (39 MBps) [2024-11-18T15:03:10.915Z] Copying: 273/1024 [MB] (43 MBps) [2024-11-18T15:03:11.482Z] Copying: 323/1024 [MB] (49 MBps) [2024-11-18T15:03:12.858Z] Copying: 367/1024 [MB] (44 MBps) [2024-11-18T15:03:13.792Z] Copying: 412/1024 [MB] (45 MBps) [2024-11-18T15:03:14.726Z] Copying: 456/1024 [MB] (44 MBps) [2024-11-18T15:03:15.661Z] Copying: 503/1024 [MB] (46 MBps) [2024-11-18T15:03:16.596Z] Copying: 549/1024 [MB] (45 MBps) [2024-11-18T15:03:17.531Z] Copying: 594/1024 [MB] (45 MBps) [2024-11-18T15:03:18.906Z] Copying: 643/1024 [MB] (49 MBps) [2024-11-18T15:03:19.836Z] Copying: 688/1024 [MB] (44 MBps) [2024-11-18T15:03:20.769Z] Copying: 733/1024 [MB] (44 MBps) [2024-11-18T15:03:21.703Z] Copying: 781/1024 [MB] (48 MBps) [2024-11-18T15:03:22.668Z] Copying: 828/1024 [MB] (46 MBps) [2024-11-18T15:03:23.684Z] Copying: 876/1024 [MB] (48 MBps) [2024-11-18T15:03:24.617Z] Copying: 923/1024 [MB] (46 MBps) [2024-11-18T15:03:25.549Z] Copying: 974/1024 [MB] (51 MBps) [2024-11-18T15:03:25.811Z] Copying: 1018/1024 [MB] (43 MBps) [2024-11-18T15:03:25.811Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-11-18 15:03:25.590482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.590541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.221 [2024-11-18 15:03:25.590555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:02.221 [2024-11-18 15:03:25.590564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.590584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.221 [2024-11-18 15:03:25.591143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.591338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.221 [2024-11-18 15:03:25.591356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:02.221 [2024-11-18 15:03:25.591373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.592767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.592794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.221 [2024-11-18 15:03:25.592804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:17:02.221 [2024-11-18 15:03:25.592812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.604766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.604796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:02.221 [2024-11-18 15:03:25.604811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.931 ms 00:17:02.221 [2024-11-18 15:03:25.604824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.610943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 
15:03:25.610967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:02.221 [2024-11-18 15:03:25.610977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:17:02.221 [2024-11-18 15:03:25.610985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.612177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.612205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.221 [2024-11-18 15:03:25.612214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:17:02.221 [2024-11-18 15:03:25.612221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.615910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.615936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.221 [2024-11-18 15:03:25.615951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:17:02.221 [2024-11-18 15:03:25.615958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.616067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.616078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.221 [2024-11-18 15:03:25.616091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:02.221 [2024-11-18 15:03:25.616099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.617804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.617842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:02.221 [2024-11-18 15:03:25.617851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:17:02.221 [2024-11-18 15:03:25.617857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.620196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.620291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:02.221 [2024-11-18 15:03:25.620344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:17:02.221 [2024-11-18 15:03:25.620366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.622179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.622265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.221 [2024-11-18 15:03:25.622288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:17:02.221 [2024-11-18 15:03:25.622306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.623703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.221 [2024-11-18 15:03:25.623735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.221 [2024-11-18 15:03:25.623746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:17:02.221 [2024-11-18 15:03:25.623756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.221 [2024-11-18 15:03:25.623792] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.221 [2024-11-18 15:03:25.623813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.623996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.624006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.624016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.624026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.624037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.221 [2024-11-18 15:03:25.624048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624068] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 
15:03:25.624350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:02.222 [2024-11-18 15:03:25.624618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.222 [2024-11-18 15:03:25.624838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.223 [2024-11-18 15:03:25.624848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.223 [2024-11-18 15:03:25.624859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.223 [2024-11-18 15:03:25.624869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:02.223 [2024-11-18 15:03:25.624880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.223 [2024-11-18 15:03:25.624903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.223 [2024-11-18 15:03:25.624920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:17:02.223 [2024-11-18 15:03:25.624931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.223 [2024-11-18 15:03:25.624941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.223 [2024-11-18 15:03:25.624951] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.223 [2024-11-18 15:03:25.624962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.223 [2024-11-18 15:03:25.624978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.223 [2024-11-18 15:03:25.624993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.223 [2024-11-18 15:03:25.625003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.223 [2024-11-18 15:03:25.625012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.223 [2024-11-18 15:03:25.625021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.223 [2024-11-18 15:03:25.625031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.223 [2024-11-18 15:03:25.625042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.223 [2024-11-18 15:03:25.625053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:17:02.223 [2024-11-18 15:03:25.625063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.627120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.223 [2024-11-18 15:03:25.627144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.223 [2024-11-18 15:03:25.627159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:17:02.223 [2024-11-18 15:03:25.627171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.627243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.223 [2024-11-18 15:03:25.627264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.223 [2024-11-18 15:03:25.627277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:02.223 [2024-11-18 15:03:25.627291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.634448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.634490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.223 [2024-11-18 15:03:25.634504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.634515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.634582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.634594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.223 [2024-11-18 15:03:25.634614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.634628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.634742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.634758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.223 [2024-11-18 15:03:25.634770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.634780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.634807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.634818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.223 [2024-11-18 15:03:25.634829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.634839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.645709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.645743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.223 [2024-11-18 15:03:25.645753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.645761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.223 [2024-11-18 15:03:25.650419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.223 [2024-11-18 15:03:25.650517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.223 [2024-11-18 15:03:25.650573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.223 [2024-11-18 15:03:25.650679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.223 
[2024-11-18 15:03:25.650731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.223 [2024-11-18 15:03:25.650801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.223 [2024-11-18 15:03:25.650863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.223 [2024-11-18 15:03:25.650871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.223 [2024-11-18 15:03:25.650879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.223 [2024-11-18 15:03:25.650998] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.485 ms, result 0 00:17:02.792 00:17:02.792 00:17:02.792 15:03:26 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:02.792 [2024-11-18 15:03:26.332756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:02.792 [2024-11-18 15:03:26.332868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84247 ] 00:17:03.051 [2024-11-18 15:03:26.479836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.051 [2024-11-18 15:03:26.527859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.051 [2024-11-18 15:03:26.627449] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.051 [2024-11-18 15:03:26.627522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.312 [2024-11-18 15:03:26.774073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.774116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.312 [2024-11-18 15:03:26.774129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.312 [2024-11-18 15:03:26.774137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.774182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.774192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.312 [2024-11-18 15:03:26.774200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:03.312 [2024-11-18 15:03:26.774210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.774226] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.312 [2024-11-18 15:03:26.774467] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.312 [2024-11-18 15:03:26.774486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.774496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.312 [2024-11-18 15:03:26.774504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:03.312 [2024-11-18 15:03:26.774511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.775838] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:03.312 [2024-11-18 15:03:26.778485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.778516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:03.312 [2024-11-18 15:03:26.778531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.650 ms 00:17:03.312 [2024-11-18 15:03:26.778539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.778589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.778603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:03.312 [2024-11-18 15:03:26.778611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:03.312 [2024-11-18 15:03:26.778618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.785026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.785059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.312 [2024-11-18 15:03:26.785068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.333 ms 00:17:03.312 [2024-11-18 15:03:26.785078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.785144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.785160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.312 [2024-11-18 15:03:26.785171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:03.312 [2024-11-18 15:03:26.785178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.785215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.785224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.312 [2024-11-18 15:03:26.785235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:03.312 [2024-11-18 15:03:26.785243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.785266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.312 [2024-11-18 15:03:26.786934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.786962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.312 [2024-11-18 15:03:26.786971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:17:03.312 [2024-11-18 15:03:26.786979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.787010] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.787019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.312 [2024-11-18 15:03:26.787030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:03.312 [2024-11-18 15:03:26.787037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.787057] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:03.312 [2024-11-18 15:03:26.787076] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:03.312 [2024-11-18 15:03:26.787115] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:03.312 [2024-11-18 15:03:26.787132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:03.312 [2024-11-18 15:03:26.787213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:03.312 [2024-11-18 15:03:26.787226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.312 [2024-11-18 15:03:26.787239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:03.312 [2024-11-18 15:03:26.787249] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787258] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787268] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:03.312 [2024-11-18 15:03:26.787276] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.312 [2024-11-18 15:03:26.787283] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:03.312 [2024-11-18 15:03:26.787290] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:03.312 [2024-11-18 15:03:26.787297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.787304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.312 [2024-11-18 15:03:26.787329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:03.312 [2024-11-18 15:03:26.787339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.787399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.312 [2024-11-18 15:03:26.787408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.312 [2024-11-18 15:03:26.787415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:03.312 [2024-11-18 15:03:26.787422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.312 [2024-11-18 15:03:26.787500] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.312 [2024-11-18 15:03:26.787511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:03.312 [2024-11-18 15:03:26.787519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.312 [2024-11-18 15:03:26.787548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.312 [2024-11-18 15:03:26.787570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.312 [2024-11-18 15:03:26.787585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.312 [2024-11-18 15:03:26.787592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:03.312 [2024-11-18 15:03:26.787600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.312 [2024-11-18 15:03:26.787613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.312 [2024-11-18 15:03:26.787621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:03.312 [2024-11-18 15:03:26.787628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.312 [2024-11-18 15:03:26.787643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:03.312 [2024-11-18 15:03:26.787651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:03.312 [2024-11-18 15:03:26.787667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:03.312 [2024-11-18 15:03:26.787677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.312 [2024-11-18 15:03:26.787693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.312 [2024-11-18 15:03:26.787709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.312 [2024-11-18 15:03:26.787716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:03.312 [2024-11-18 15:03:26.787723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.313 [2024-11-18 15:03:26.787732] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.313 [2024-11-18 15:03:26.787740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:03.313 [2024-11-18 15:03:26.787747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.313 [2024-11-18 15:03:26.787755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.313 [2024-11-18 15:03:26.787762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:03.313 [2024-11-18 15:03:26.787771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.313 [2024-11-18 15:03:26.787779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.313 [2024-11-18 15:03:26.787789] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:17:03.313 [2024-11-18 15:03:26.787796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.313 [2024-11-18 15:03:26.787804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.313 [2024-11-18 15:03:26.787812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:03.313 [2024-11-18 15:03:26.787819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.313 [2024-11-18 15:03:26.787826] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.313 [2024-11-18 15:03:26.787834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.313 [2024-11-18 15:03:26.787843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.313 [2024-11-18 15:03:26.787851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.313 [2024-11-18 15:03:26.787859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:03.313 [2024-11-18 15:03:26.787867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.313 [2024-11-18 15:03:26.787875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.313 [2024-11-18 15:03:26.787882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.313 [2024-11-18 15:03:26.787890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.313 [2024-11-18 15:03:26.787897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.313 [2024-11-18 15:03:26.787906] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.313 [2024-11-18 15:03:26.787918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.313 [2024-11-18 15:03:26.787927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:03.313 [2024-11-18 15:03:26.787936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:03.313 [2024-11-18 15:03:26.787944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:03.313 [2024-11-18 15:03:26.787955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:03.313 [2024-11-18 15:03:26.787963] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:03.313 [2024-11-18 15:03:26.787971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:03.313 [2024-11-18 15:03:26.787978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:03.313 [2024-11-18 15:03:26.787985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:03.313 [2024-11-18 15:03:26.787992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:03.313 [2024-11-18 15:03:26.787999] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:03.313 [2024-11-18 15:03:26.788006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:03.313 [2024-11-18 15:03:26.788014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:03.313 [2024-11-18 15:03:26.788021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:03.313 [2024-11-18 15:03:26.788028] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.313 [2024-11-18 15:03:26.788042] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.313 [2024-11-18 15:03:26.788051] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.313 [2024-11-18 15:03:26.788059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.313 [2024-11-18 15:03:26.788066] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.313 [2024-11-18 15:03:26.788073] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.313 [2024-11-18 15:03:26.788081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.788088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.313 [2024-11-18 15:03:26.788095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:17:03.313 [2024-11-18 15:03:26.788104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.795841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.795869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.313 [2024-11-18 15:03:26.795878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.698 ms 00:17:03.313 [2024-11-18 15:03:26.795886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.795971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.795980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:03.313 [2024-11-18 15:03:26.795994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:03.313 [2024-11-18 15:03:26.796003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.815641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.815684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.313 [2024-11-18 15:03:26.815698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.591 ms 00:17:03.313 [2024-11-18 15:03:26.815709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.815758] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.815778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.313 [2024-11-18 15:03:26.815799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.313 [2024-11-18 15:03:26.815808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.816275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.816304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.313 [2024-11-18 15:03:26.816333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:03.313 [2024-11-18 15:03:26.816344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.816499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.816512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.313 [2024-11-18 15:03:26.816524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:03.313 [2024-11-18 15:03:26.816538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.823574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.823603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.313 [2024-11-18 15:03:26.823612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:17:03.313 [2024-11-18 15:03:26.823620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.826388] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:03.313 [2024-11-18 15:03:26.826420] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:03.313 [2024-11-18 15:03:26.826430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.826439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:03.313 [2024-11-18 15:03:26.826448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:17:03.313 [2024-11-18 15:03:26.826455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.841205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.841244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:03.313 [2024-11-18 15:03:26.841255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.712 ms 00:17:03.313 [2024-11-18 15:03:26.841263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.842924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.842951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:03.313 [2024-11-18 15:03:26.842959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:17:03.313 [2024-11-18 15:03:26.842967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.844476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
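The trace_step records in this log follow a fixed pattern — an Action/Rollback header, then name, duration, and status — and each management process ends with a finish_msg line carrying a total. Per-step FTL timings can therefore be recovered from a captured console log mechanically; below is a minimal reader-side sketch in Python (a hypothetical helper, not part of the SPDK test suite — the regexes simply mirror the message formats visible in this log, and nothing here is an SPDK API):

    import re
    import sys

    # Hedged sketch: pull FTL management timings back out of a captured
    # console log. Both patterns mirror the *NOTICE* messages emitted by
    # mngt/ftl_mngt.c as seen in the surrounding output.
    step_ms = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
    proc_ms = re.compile(r"Management process finished, name '([^']+)', duration = ([0-9.]+) ms")

    def summarize(log_text):
        # Sum every per-step duration, then list each process total reported
        # by finish_msg (the log may contain several processes per device).
        steps = [float(m.group(1)) for m in step_ms.finditer(log_text)]
        print(f"{len(steps)} trace_step records, step durations sum to {sum(steps):.3f} ms")
        for name, total in proc_ms.findall(log_text):
            print(f"  process {name!r}: {float(total):.3f} ms")

    if __name__ == "__main__":
        summarize(open(sys.argv[1], errors="replace").read())

Run against the portion of the log above, it would pick out, for example, the 'FTL startup' total of 97.242 ms and the 'FTL shutdown' total of 60.485 ms from the finish_msg lines, alongside the individual step durations.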
00:17:03.313 [2024-11-18 15:03:26.844501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:03.313 [2024-11-18 15:03:26.844510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:17:03.313 [2024-11-18 15:03:26.844522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.844711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.844729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.313 [2024-11-18 15:03:26.844741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:03.313 [2024-11-18 15:03:26.844748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.865643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.313 [2024-11-18 15:03:26.865685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:03.313 [2024-11-18 15:03:26.865702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.873 ms 00:17:03.313 [2024-11-18 15:03:26.865714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.313 [2024-11-18 15:03:26.873257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:03.314 [2024-11-18 15:03:26.875901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.875926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.314 [2024-11-18 15:03:26.875938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.141 ms 00:17:03.314 [2024-11-18 15:03:26.875953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.876020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.876031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:03.314 [2024-11-18 15:03:26.876043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.314 [2024-11-18 15:03:26.876051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.876119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.876137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.314 [2024-11-18 15:03:26.876146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:03.314 [2024-11-18 15:03:26.876153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.877525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.877552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:03.314 [2024-11-18 15:03:26.877561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:17:03.314 [2024-11-18 15:03:26.877569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.877597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.877616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.314 [2024-11-18 15:03:26.877624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.314 
[2024-11-18 15:03:26.877631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.877669] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:03.314 [2024-11-18 15:03:26.877683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.877691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:03.314 [2024-11-18 15:03:26.877700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:03.314 [2024-11-18 15:03:26.877709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.881233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.881263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.314 [2024-11-18 15:03:26.881279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:17:03.314 [2024-11-18 15:03:26.881287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.881387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.314 [2024-11-18 15:03:26.881402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.314 [2024-11-18 15:03:26.881411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:03.314 [2024-11-18 15:03:26.881419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.314 [2024-11-18 15:03:26.882420] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.908 ms, result 0 00:17:04.690  [2024-11-18T15:03:29.213Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-18T15:03:30.169Z] Copying: 92/1024 [MB] (45 MBps) [2024-11-18T15:03:31.102Z] Copying: 140/1024 [MB] (48 MBps) [2024-11-18T15:03:32.473Z] Copying: 188/1024 [MB] (48 MBps) [2024-11-18T15:03:33.406Z] Copying: 236/1024 [MB] (48 MBps) [2024-11-18T15:03:34.342Z] Copying: 286/1024 [MB] (49 MBps) [2024-11-18T15:03:35.278Z] Copying: 339/1024 [MB] (53 MBps) [2024-11-18T15:03:36.213Z] Copying: 394/1024 [MB] (54 MBps) [2024-11-18T15:03:37.146Z] Copying: 442/1024 [MB] (47 MBps) [2024-11-18T15:03:38.080Z] Copying: 489/1024 [MB] (47 MBps) [2024-11-18T15:03:39.455Z] Copying: 540/1024 [MB] (50 MBps) [2024-11-18T15:03:40.388Z] Copying: 593/1024 [MB] (53 MBps) [2024-11-18T15:03:41.326Z] Copying: 645/1024 [MB] (52 MBps) [2024-11-18T15:03:42.263Z] Copying: 689/1024 [MB] (43 MBps) [2024-11-18T15:03:43.202Z] Copying: 732/1024 [MB] (42 MBps) [2024-11-18T15:03:44.158Z] Copying: 776/1024 [MB] (44 MBps) [2024-11-18T15:03:45.097Z] Copying: 832/1024 [MB] (55 MBps) [2024-11-18T15:03:46.480Z] Copying: 882/1024 [MB] (49 MBps) [2024-11-18T15:03:47.418Z] Copying: 927/1024 [MB] (45 MBps) [2024-11-18T15:03:48.362Z] Copying: 973/1024 [MB] (46 MBps) [2024-11-18T15:03:48.362Z] Copying: 1015/1024 [MB] (42 MBps) [2024-11-18T15:03:48.362Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-18 15:03:48.312203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.772 [2024-11-18 15:03:48.312258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.772 [2024-11-18 15:03:48.312273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.772 [2024-11-18 15:03:48.312281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.772 
[2024-11-18 15:03:48.312306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:24.772 [2024-11-18 15:03:48.312857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.772 [2024-11-18 15:03:48.312875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.772 [2024-11-18 15:03:48.312884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:17:24.772 [2024-11-18 15:03:48.312892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.772 [2024-11-18 15:03:48.313106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.772 [2024-11-18 15:03:48.313116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.772 [2024-11-18 15:03:48.313126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:24.772 [2024-11-18 15:03:48.313134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.772 [2024-11-18 15:03:48.316590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.316614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.773 [2024-11-18 15:03:48.316624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:17:24.773 [2024-11-18 15:03:48.316633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.323414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.323447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:24.773 [2024-11-18 15:03:48.323460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.767 ms 00:17:24.773 [2024-11-18 15:03:48.323468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.324891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.324921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.773 [2024-11-18 15:03:48.324929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:17:24.773 [2024-11-18 15:03:48.324936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.328429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.328456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.773 [2024-11-18 15:03:48.328475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.464 ms 00:17:24.773 [2024-11-18 15:03:48.328487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.328595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.328606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.773 [2024-11-18 15:03:48.328619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:24.773 [2024-11-18 15:03:48.328627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.330900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.330926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:24.773 [2024-11-18 
15:03:48.330934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:17:24.773 [2024-11-18 15:03:48.330941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.332471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.332507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:24.773 [2024-11-18 15:03:48.332517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:17:24.773 [2024-11-18 15:03:48.332525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.333387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.333413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.773 [2024-11-18 15:03:48.333421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:17:24.773 [2024-11-18 15:03:48.333428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.334539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.773 [2024-11-18 15:03:48.334566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.773 [2024-11-18 15:03:48.334575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:17:24.773 [2024-11-18 15:03:48.334582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.773 [2024-11-18 15:03:48.334609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.773 [2024-11-18 15:03:48.334630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 
15:03:48.334741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:17:24.773 [2024-11-18 15:03:48.334930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.773 [2024-11-18 15:03:48.334995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.774 [2024-11-18 15:03:48.335404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.774 [2024-11-18 15:03:48.335411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:17:24.774 [2024-11-18 15:03:48.335419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.774 [2024-11-18 15:03:48.335427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.774 [2024-11-18 15:03:48.335433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.774 [2024-11-18 15:03:48.335441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.774 [2024-11-18 15:03:48.335448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.774 [2024-11-18 15:03:48.335456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.774 [2024-11-18 15:03:48.335463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.774 [2024-11-18 15:03:48.335469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.774 [2024-11-18 15:03:48.335475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.774 [2024-11-18 15:03:48.335482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.774 [2024-11-18 15:03:48.335495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.774 [2024-11-18 15:03:48.335505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:17:24.774 [2024-11-18 15:03:48.335512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.774 [2024-11-18 15:03:48.337261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:24.774 [2024-11-18 15:03:48.337283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.774 [2024-11-18 15:03:48.337293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:17:24.774 [2024-11-18 15:03:48.337300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.774 [2024-11-18 15:03:48.337388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.774 [2024-11-18 15:03:48.337401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.774 [2024-11-18 15:03:48.337409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:24.774 [2024-11-18 15:03:48.337416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.774 [2024-11-18 15:03:48.343634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.774 [2024-11-18 15:03:48.343669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.774 [2024-11-18 15:03:48.343678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.774 [2024-11-18 15:03:48.343694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.775 [2024-11-18 15:03:48.343748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.775 [2024-11-18 15:03:48.343761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.775 [2024-11-18 15:03:48.343770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.775 [2024-11-18 15:03:48.343777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.775 [2024-11-18 15:03:48.343837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.775 [2024-11-18 15:03:48.343847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.775 [2024-11-18 15:03:48.343855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.775 [2024-11-18 15:03:48.343863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.775 [2024-11-18 15:03:48.343878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.775 [2024-11-18 15:03:48.343885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.775 [2024-11-18 15:03:48.343899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.775 [2024-11-18 15:03:48.343907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.775 [2024-11-18 15:03:48.355611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.775 [2024-11-18 15:03:48.355642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.775 [2024-11-18 15:03:48.355651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.775 [2024-11-18 15:03:48.355659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.035 [2024-11-18 15:03:48.360279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.035 [2024-11-18 15:03:48.360305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.035 [2024-11-18 15:03:48.360333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 
[2024-11-18 15:03:48.360393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.036 [2024-11-18 15:03:48.360411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.036 [2024-11-18 15:03:48.360472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.036 [2024-11-18 15:03:48.360566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:25.036 [2024-11-18 15:03:48.360622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.036 [2024-11-18 15:03:48.360689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.036 [2024-11-18 15:03:48.360753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.036 [2024-11-18 15:03:48.360762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.036 [2024-11-18 15:03:48.360775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.036 [2024-11-18 15:03:48.360900] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.675 ms, result 0 00:17:25.036 00:17:25.036 00:17:25.036 15:03:48 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:27.578 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:17:27.579 15:03:50 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:17:27.579 [2024-11-18 15:03:50.630852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:27.579 [2024-11-18 15:03:50.630973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84505 ] 00:17:27.579 [2024-11-18 15:03:50.776370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.579 [2024-11-18 15:03:50.815808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.579 [2024-11-18 15:03:50.914868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.579 [2024-11-18 15:03:50.914946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.579 [2024-11-18 15:03:51.064389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.064431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.579 [2024-11-18 15:03:51.064445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:27.579 [2024-11-18 15:03:51.064452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.064497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.064507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.579 [2024-11-18 15:03:51.064516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:27.579 [2024-11-18 15:03:51.064529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.064545] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.579 [2024-11-18 15:03:51.064812] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.579 [2024-11-18 15:03:51.064874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.064884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.579 [2024-11-18 15:03:51.064895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:27.579 [2024-11-18 15:03:51.064904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.066294] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.579 [2024-11-18 15:03:51.069287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.069330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.579 [2024-11-18 15:03:51.069343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:17:27.579 [2024-11-18 15:03:51.069351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.069398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.069408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.579 [2024-11-18 15:03:51.069416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:27.579 [2024-11-18 15:03:51.069423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.075687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 
15:03:51.075719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.579 [2024-11-18 15:03:51.075728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:17:27.579 [2024-11-18 15:03:51.075739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.075805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.075814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.579 [2024-11-18 15:03:51.075822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:27.579 [2024-11-18 15:03:51.075833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.075872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.075882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.579 [2024-11-18 15:03:51.075892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:27.579 [2024-11-18 15:03:51.075900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.075923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.579 [2024-11-18 15:03:51.077586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.077613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.579 [2024-11-18 15:03:51.077621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:17:27.579 [2024-11-18 15:03:51.077632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.077663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.077671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.579 [2024-11-18 15:03:51.077681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:27.579 [2024-11-18 15:03:51.077688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.077706] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.579 [2024-11-18 15:03:51.077726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:27.579 [2024-11-18 15:03:51.077759] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.579 [2024-11-18 15:03:51.077774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:27.579 [2024-11-18 15:03:51.077848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:27.579 [2024-11-18 15:03:51.077861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.579 [2024-11-18 15:03:51.077873] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:27.579 [2024-11-18 15:03:51.077883] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.579 [2024-11-18 15:03:51.077893] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.579 [2024-11-18 15:03:51.077905] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:27.579 [2024-11-18 15:03:51.077912] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.579 [2024-11-18 15:03:51.077919] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:27.579 [2024-11-18 15:03:51.077926] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:27.579 [2024-11-18 15:03:51.077934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.077941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.579 [2024-11-18 15:03:51.077950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:27.579 [2024-11-18 15:03:51.077960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.078018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.579 [2024-11-18 15:03:51.078028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.579 [2024-11-18 15:03:51.078035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:27.579 [2024-11-18 15:03:51.078042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.579 [2024-11-18 15:03:51.078115] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.579 [2024-11-18 15:03:51.078124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.579 [2024-11-18 15:03:51.078133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.579 [2024-11-18 15:03:51.078160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.579 [2024-11-18 15:03:51.078182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.579 [2024-11-18 15:03:51.078195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.579 [2024-11-18 15:03:51.078201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:27.579 [2024-11-18 15:03:51.078209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.579 [2024-11-18 15:03:51.078221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.579 [2024-11-18 15:03:51.078227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:27.579 [2024-11-18 15:03:51.078235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.579 [2024-11-18 15:03:51.078252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:27.579 [2024-11-18 15:03:51.078260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:27.579 [2024-11-18 15:03:51.078276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:27.579 [2024-11-18 15:03:51.078283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.579 [2024-11-18 15:03:51.078299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078326] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.579 [2024-11-18 15:03:51.078335] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.579 [2024-11-18 15:03:51.078356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:27.579 [2024-11-18 15:03:51.078364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.579 [2024-11-18 15:03:51.078371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.579 [2024-11-18 15:03:51.078378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:27.580 [2024-11-18 15:03:51.078385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.580 [2024-11-18 15:03:51.078393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.580 [2024-11-18 15:03:51.078403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:27.580 [2024-11-18 15:03:51.078411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.580 [2024-11-18 15:03:51.078419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.580 [2024-11-18 15:03:51.078427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:27.580 [2024-11-18 15:03:51.078433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.580 [2024-11-18 15:03:51.078440] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.580 [2024-11-18 15:03:51.078448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.580 [2024-11-18 15:03:51.078457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.580 [2024-11-18 15:03:51.078465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.580 [2024-11-18 15:03:51.078476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.580 [2024-11-18 15:03:51.078484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.580 [2024-11-18 15:03:51.078491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.580 [2024-11-18 15:03:51.078499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.580 [2024-11-18 15:03:51.078508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.580 [2024-11-18 15:03:51.078516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.580 [2024-11-18 15:03:51.078524] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.580 [2024-11-18 15:03:51.078536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.580 [2024-11-18 15:03:51.078545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:27.580 [2024-11-18 15:03:51.078553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:27.580 [2024-11-18 15:03:51.078562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:27.580 [2024-11-18 15:03:51.078569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:27.580 [2024-11-18 15:03:51.078578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:27.580 [2024-11-18 15:03:51.078586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:27.580 [2024-11-18 15:03:51.078594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:27.580 [2024-11-18 15:03:51.078603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:27.580 [2024-11-18 15:03:51.078611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:27.580 [2024-11-18 15:03:51.078618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:27.580 [2024-11-18 15:03:51.078626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:27.580 [2024-11-18 15:03:51.078634] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:27.580 [2024-11-18 15:03:51.078643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:27.580 [2024-11-18 15:03:51.078651] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.580 [2024-11-18 15:03:51.078676] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.580 [2024-11-18 15:03:51.078689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.580 [2024-11-18 15:03:51.078696] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.580 [2024-11-18 15:03:51.078704] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.580 [2024-11-18 15:03:51.078711] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:27.580 [2024-11-18 15:03:51.078718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.078725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.580 [2024-11-18 15:03:51.078732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:17:27.580 [2024-11-18 15:03:51.078741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.086220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.086257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.580 [2024-11-18 15:03:51.086267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.444 ms 00:17:27.580 [2024-11-18 15:03:51.086275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.086374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.086384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.580 [2024-11-18 15:03:51.086393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:27.580 [2024-11-18 15:03:51.086402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.103813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.103858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.580 [2024-11-18 15:03:51.103873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.364 ms 00:17:27.580 [2024-11-18 15:03:51.103883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.103929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.103941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.580 [2024-11-18 15:03:51.103955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.580 [2024-11-18 15:03:51.103968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.104447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.104480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.580 [2024-11-18 15:03:51.104499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:27.580 [2024-11-18 15:03:51.104514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.104661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.104674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.580 [2024-11-18 15:03:51.104685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:27.580 [2024-11-18 15:03:51.104696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.111913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.111948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.580 [2024-11-18 15:03:51.111959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.190 ms 00:17:27.580 [2024-11-18 
15:03:51.111975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.115274] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:27.580 [2024-11-18 15:03:51.115309] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.580 [2024-11-18 15:03:51.115348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.115356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.580 [2024-11-18 15:03:51.115364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms 00:17:27.580 [2024-11-18 15:03:51.115371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.130188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.130241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:27.580 [2024-11-18 15:03:51.130256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.778 ms 00:17:27.580 [2024-11-18 15:03:51.130264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.132218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.132249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:27.580 [2024-11-18 15:03:51.132258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 00:17:27.580 [2024-11-18 15:03:51.132265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.133910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.133940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:27.580 [2024-11-18 15:03:51.133952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:17:27.580 [2024-11-18 15:03:51.133959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.134146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.134162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:27.580 [2024-11-18 15:03:51.134176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:27.580 [2024-11-18 15:03:51.134189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.155268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.580 [2024-11-18 15:03:51.155308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:27.580 [2024-11-18 15:03:51.155339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.059 ms 00:17:27.580 [2024-11-18 15:03:51.155347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.580 [2024-11-18 15:03:51.162827] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:27.842 [2024-11-18 15:03:51.165525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.165553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:27.842 [2024-11-18 15:03:51.165564] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.134 ms 00:17:27.842 [2024-11-18 15:03:51.165572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.165631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.165643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:27.842 [2024-11-18 15:03:51.165652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:27.842 [2024-11-18 15:03:51.165661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.165727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.165737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:27.842 [2024-11-18 15:03:51.165745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:27.842 [2024-11-18 15:03:51.165752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.167079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.167110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:27.842 [2024-11-18 15:03:51.167124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:17:27.842 [2024-11-18 15:03:51.167132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.167163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.167174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:27.842 [2024-11-18 15:03:51.167182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:27.842 [2024-11-18 15:03:51.167189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.167224] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:27.842 [2024-11-18 15:03:51.167234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.167244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:27.842 [2024-11-18 15:03:51.167257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:27.842 [2024-11-18 15:03:51.167264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.170771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.170802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:27.842 [2024-11-18 15:03:51.170816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms 00:17:27.842 [2024-11-18 15:03:51.170828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.170892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.842 [2024-11-18 15:03:51.170905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:27.842 [2024-11-18 15:03:51.170914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:27.842 [2024-11-18 15:03:51.170925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.842 [2024-11-18 15:03:51.171892] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 107.082 ms, result 0 00:17:28.785  [2024-11-18T15:04:17.299Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-11-18 15:04:16.985616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.709 [2024-11-18 15:04:16.985677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:53.709 [2024-11-18 15:04:16.985691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:53.709 [2024-11-18 15:04:16.985700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.709 [2024-11-18 15:04:16.987861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.709 [2024-11-18 15:04:16.990143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.709 [2024-11-18 15:04:16.990178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:53.709 [2024-11-18 15:04:16.990191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:17:53.710 [2024-11-18 15:04:16.990199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.002046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.002077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:53.710 [2024-11-18 15:04:17.002089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.871 ms 00:17:53.710 [2024-11-18 15:04:17.002097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.020509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.020545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:53.710 [2024-11-18 15:04:17.020556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:17:53.710 [2024-11-18 15:04:17.020564] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.026651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.026689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:53.710 [2024-11-18 15:04:17.026699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.060 ms 00:17:53.710 [2024-11-18 15:04:17.026706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.028034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.028064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:53.710 [2024-11-18 15:04:17.028073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:17:53.710 [2024-11-18 15:04:17.028082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.031762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.031792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:53.710 [2024-11-18 15:04:17.031802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:17:53.710 [2024-11-18 15:04:17.031810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.083208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.083240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:53.710 [2024-11-18 15:04:17.083256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.364 ms 00:17:53.710 [2024-11-18 15:04:17.083264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.085020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.085050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:53.710 [2024-11-18 15:04:17.085059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:17:53.710 [2024-11-18 15:04:17.085066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.086264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.086293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:53.710 [2024-11-18 15:04:17.086302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:17:53.710 [2024-11-18 15:04:17.086309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.087424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.087453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:53.710 [2024-11-18 15:04:17.087462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:17:53.710 [2024-11-18 15:04:17.087469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.088505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.710 [2024-11-18 15:04:17.088533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:53.710 [2024-11-18 15:04:17.088542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.987 ms 00:17:53.710 [2024-11-18 15:04:17.088549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.710 [2024-11-18 15:04:17.088575] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:53.710 [2024-11-18 15:04:17.088590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121600 / 261120 wr_cnt: 1 state: open 00:17:53.710 [2024-11-18 15:04:17.088600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 
15:04:17.088765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:17:53.710 [2024-11-18 15:04:17.088953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:53.710 [2024-11-18 15:04:17.088989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.088997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.711 [2024-11-18 15:04:17.089369] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.711 [2024-11-18 15:04:17.089377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:17:53.711 [2024-11-18 15:04:17.089385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121600 00:17:53.711 [2024-11-18 15:04:17.089393] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122560 00:17:53.711 [2024-11-18 15:04:17.089402] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121600 00:17:53.711 [2024-11-18 15:04:17.089413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:17:53.711 [2024-11-18 15:04:17.089421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.711 [2024-11-18 15:04:17.089428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.711 [2024-11-18 15:04:17.089436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.711 [2024-11-18 15:04:17.089442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.711 [2024-11-18 15:04:17.089449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.711 [2024-11-18 15:04:17.089455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.711 [2024-11-18 15:04:17.089463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.711 [2024-11-18 15:04:17.089470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:17:53.711 [2024-11-18 15:04:17.089478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.091217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.711 [2024-11-18 15:04:17.091249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.711 [2024-11-18 15:04:17.091258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:17:53.711 [2024-11-18 15:04:17.091265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.091359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.711 [2024-11-18 15:04:17.091374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.711 [2024-11-18 15:04:17.091382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:53.711 [2024-11-18 15:04:17.091389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.097496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.097538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.711 [2024-11-18 15:04:17.097547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.097554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.097612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 
15:04:17.097622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.711 [2024-11-18 15:04:17.097629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.097638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.097695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.097714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.711 [2024-11-18 15:04:17.097722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.097729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.097744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.097756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.711 [2024-11-18 15:04:17.097764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.097771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.108457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.108496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.711 [2024-11-18 15:04:17.108507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.108516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.112979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.113011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.711 [2024-11-18 15:04:17.113020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.113035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.113095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.113105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.711 [2024-11-18 15:04:17.113117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.711 [2024-11-18 15:04:17.113124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.711 [2024-11-18 15:04:17.113154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.711 [2024-11-18 15:04:17.113164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.712 [2024-11-18 15:04:17.113172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.712 [2024-11-18 15:04:17.113180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.712 [2024-11-18 15:04:17.113245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.712 [2024-11-18 15:04:17.113264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.712 [2024-11-18 15:04:17.113275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.712 [2024-11-18 15:04:17.113283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.712 [2024-11-18 15:04:17.113311] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.712 [2024-11-18 15:04:17.113369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.712 [2024-11-18 15:04:17.113378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.712 [2024-11-18 15:04:17.113385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.712 [2024-11-18 15:04:17.113423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.712 [2024-11-18 15:04:17.113436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.712 [2024-11-18 15:04:17.113443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.712 [2024-11-18 15:04:17.113453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.712 [2024-11-18 15:04:17.113497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.712 [2024-11-18 15:04:17.113506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.712 [2024-11-18 15:04:17.113515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.712 [2024-11-18 15:04:17.113522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.712 [2024-11-18 15:04:17.113647] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 130.959 ms, result 0 00:17:55.086 00:17:55.086 00:17:55.086 15:04:18 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:17:55.086 [2024-11-18 15:04:18.409153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
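A brief aside on the restore invocation echoed just above: the sketch below is a minimal sanity check, assuming spdk_dd's --skip and --count are counted in input-bdev blocks of 4096 bytes — an inference from this log, where --count=262144 is reported by the progress lines as a 1024 MB copy. It also reproduces the WAF figure from the earlier statistics dump. None of this shell is part of the test suite; it only restates arithmetic already visible in the log.

    # Sketch (not part of the test run): check the spdk_dd restore window
    # against the arguments echoed above. The 4096-byte logical block size
    # is an assumption inferred from this log (262144 blocks ~ 1024 MB).
    BLOCK_SIZE=4096
    SKIP=131072      # --skip, in input-bdev blocks
    COUNT=262144     # --count, in input-bdev blocks

    # 131072 * 4096 = 512 MiB offset; 262144 * 4096 = 1024 MiB copied,
    # matching the "Copying: .../1024 [MB]" progress lines.
    echo "offset: $(( SKIP * BLOCK_SIZE / 1024 / 1024 )) MiB"
    echo "length: $(( COUNT * BLOCK_SIZE / 1024 / 1024 )) MiB"

    # The stats dump above reports WAF 1.0079; it is total writes divided
    # by user writes (122560 / 121600 in this run).
    awk 'BEGIN { printf "WAF: %.4f\n", 122560 / 121600 }'

Run as-is, this prints a 512 MiB offset and a 1024 MiB length, consistent with the second restore pass picking up where the first --skip=131072/--count=262144 window would place it.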
00:17:55.086 [2024-11-18 15:04:18.409292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84801 ] 00:17:55.086 [2024-11-18 15:04:18.555340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.086 [2024-11-18 15:04:18.595380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.345 [2024-11-18 15:04:18.694648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.345 [2024-11-18 15:04:18.694743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.345 [2024-11-18 15:04:18.841396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.841444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.345 [2024-11-18 15:04:18.841458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:55.345 [2024-11-18 15:04:18.841469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.841521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.841532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.345 [2024-11-18 15:04:18.841541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:55.345 [2024-11-18 15:04:18.841551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.841574] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.345 [2024-11-18 15:04:18.841810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.345 [2024-11-18 15:04:18.841826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.841836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.345 [2024-11-18 15:04:18.841845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:55.345 [2024-11-18 15:04:18.841852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.843474] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:55.345 [2024-11-18 15:04:18.845994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.846025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:55.345 [2024-11-18 15:04:18.846040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:17:55.345 [2024-11-18 15:04:18.846048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.846096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.846106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:55.345 [2024-11-18 15:04:18.846114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:55.345 [2024-11-18 15:04:18.846121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.852396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 
15:04:18.852429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.345 [2024-11-18 15:04:18.852438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms 00:17:55.345 [2024-11-18 15:04:18.852445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.852513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.852523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.345 [2024-11-18 15:04:18.852534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:55.345 [2024-11-18 15:04:18.852542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.852578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.852592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.345 [2024-11-18 15:04:18.852602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.345 [2024-11-18 15:04:18.852610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.345 [2024-11-18 15:04:18.852634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.345 [2024-11-18 15:04:18.854263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.345 [2024-11-18 15:04:18.854288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.345 [2024-11-18 15:04:18.854298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:17:55.345 [2024-11-18 15:04:18.854305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.346 [2024-11-18 15:04:18.854350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.346 [2024-11-18 15:04:18.854359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.346 [2024-11-18 15:04:18.854370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:55.346 [2024-11-18 15:04:18.854377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.346 [2024-11-18 15:04:18.854396] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:55.346 [2024-11-18 15:04:18.854417] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:55.346 [2024-11-18 15:04:18.854450] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:55.346 [2024-11-18 15:04:18.854466] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:55.346 [2024-11-18 15:04:18.854543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:55.346 [2024-11-18 15:04:18.854562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.346 [2024-11-18 15:04:18.854575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:55.346 [2024-11-18 15:04:18.854588] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.346 [2024-11-18 15:04:18.854600] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.346 [2024-11-18 15:04:18.854609] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.346 [2024-11-18 15:04:18.854616] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.346 [2024-11-18 15:04:18.854627] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:55.346 [2024-11-18 15:04:18.854635] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:55.346 [2024-11-18 15:04:18.854642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.346 [2024-11-18 15:04:18.854655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.346 [2024-11-18 15:04:18.854663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:55.346 [2024-11-18 15:04:18.854680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.346 [2024-11-18 15:04:18.854739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.346 [2024-11-18 15:04:18.854748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.346 [2024-11-18 15:04:18.854761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:55.346 [2024-11-18 15:04:18.854769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.346 [2024-11-18 15:04:18.854842] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.346 [2024-11-18 15:04:18.854859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.346 [2024-11-18 15:04:18.854870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.346 [2024-11-18 15:04:18.854878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.346 [2024-11-18 15:04:18.854891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.346 [2024-11-18 15:04:18.854897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.346 [2024-11-18 15:04:18.854905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.346 [2024-11-18 15:04:18.854913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.346 [2024-11-18 15:04:18.854920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.346 [2024-11-18 15:04:18.854929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.346 [2024-11-18 15:04:18.854937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.346 [2024-11-18 15:04:18.854945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.346 [2024-11-18 15:04:18.854953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.346 [2024-11-18 15:04:18.854967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.346 [2024-11-18 15:04:18.854975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:55.346 [2024-11-18 15:04:18.854983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.346 [2024-11-18 15:04:18.854991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.346 [2024-11-18 15:04:18.854999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:55.346 [2024-11-18 15:04:18.855008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:55.346 [2024-11-18 15:04:18.855024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:55.346 [2024-11-18 15:04:18.855032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.346 [2024-11-18 15:04:18.855047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.346 [2024-11-18 15:04:18.855070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.346 [2024-11-18 15:04:18.855092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.346 [2024-11-18 15:04:18.855115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.346 [2024-11-18 15:04:18.855140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.346 [2024-11-18 15:04:18.855156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.346 [2024-11-18 15:04:18.855164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:55.346 [2024-11-18 15:04:18.855171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.346 [2024-11-18 15:04:18.855179] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.346 [2024-11-18 15:04:18.855187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.346 [2024-11-18 15:04:18.855196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.346 [2024-11-18 15:04:18.855213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.346 [2024-11-18 15:04:18.855221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.346 [2024-11-18 15:04:18.855228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.346 [2024-11-18 15:04:18.855236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.346 [2024-11-18 15:04:18.855243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.346 [2024-11-18 15:04:18.855251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.346 [2024-11-18 15:04:18.855263] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.346 [2024-11-18 15:04:18.855274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.346 [2024-11-18 15:04:18.855283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.346 [2024-11-18 15:04:18.855292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:55.346 [2024-11-18 15:04:18.855300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:55.346 [2024-11-18 15:04:18.855307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:55.346 [2024-11-18 15:04:18.855331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:55.346 [2024-11-18 15:04:18.855340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:55.346 [2024-11-18 15:04:18.855348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:55.346 [2024-11-18 15:04:18.855355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:55.346 [2024-11-18 15:04:18.855363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:55.346 [2024-11-18 15:04:18.855370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:55.346 [2024-11-18 15:04:18.855377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:55.346 [2024-11-18 15:04:18.855385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:55.346 [2024-11-18 15:04:18.855393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:55.346 [2024-11-18 15:04:18.855399] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.346 [2024-11-18 15:04:18.855410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.346 [2024-11-18 15:04:18.855418] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.346 [2024-11-18 15:04:18.855425] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.346 [2024-11-18 15:04:18.855433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.346 [2024-11-18 15:04:18.855440] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:55.346 [2024-11-18 15:04:18.855448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.855455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.347 [2024-11-18 15:04:18.855464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:17:55.347 [2024-11-18 15:04:18.855473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.863047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.863084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.347 [2024-11-18 15:04:18.863094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.527 ms 00:17:55.347 [2024-11-18 15:04:18.863102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.863188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.863197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:55.347 [2024-11-18 15:04:18.863207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:55.347 [2024-11-18 15:04:18.863218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.887231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.887274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.347 [2024-11-18 15:04:18.887293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.970 ms 00:17:55.347 [2024-11-18 15:04:18.887302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.887362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.887378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.347 [2024-11-18 15:04:18.887395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:55.347 [2024-11-18 15:04:18.887404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.887854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.887879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.347 [2024-11-18 15:04:18.887891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:17:55.347 [2024-11-18 15:04:18.887900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.888035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.888047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.347 [2024-11-18 15:04:18.888058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:55.347 [2024-11-18 15:04:18.888070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.895122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.895153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.347 [2024-11-18 15:04:18.895163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.028 ms 00:17:55.347 [2024-11-18 
15:04:18.895172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.897877] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:17:55.347 [2024-11-18 15:04:18.897914] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:55.347 [2024-11-18 15:04:18.897926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.897935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:55.347 [2024-11-18 15:04:18.897945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:17:55.347 [2024-11-18 15:04:18.897954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.912682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.912715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:55.347 [2024-11-18 15:04:18.912733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.690 ms 00:17:55.347 [2024-11-18 15:04:18.912744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.914493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.914522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:55.347 [2024-11-18 15:04:18.914530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:17:55.347 [2024-11-18 15:04:18.914538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.915966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.915995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:55.347 [2024-11-18 15:04:18.916007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:17:55.347 [2024-11-18 15:04:18.916014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.347 [2024-11-18 15:04:18.916208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.347 [2024-11-18 15:04:18.916225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:55.347 [2024-11-18 15:04:18.916236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:55.347 [2024-11-18 15:04:18.916244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.605 [2024-11-18 15:04:18.937524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.605 [2024-11-18 15:04:18.937571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:55.605 [2024-11-18 15:04:18.937583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.259 ms 00:17:55.605 [2024-11-18 15:04:18.937591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.605 [2024-11-18 15:04:18.945119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:55.605 [2024-11-18 15:04:18.947726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.605 [2024-11-18 15:04:18.947754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:55.605 [2024-11-18 15:04:18.947766] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.091 ms 00:17:55.605 [2024-11-18 15:04:18.947773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.605 [2024-11-18 15:04:18.947836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.605 [2024-11-18 15:04:18.947848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:55.605 [2024-11-18 15:04:18.947857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:55.605 [2024-11-18 15:04:18.947865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.605 [2024-11-18 15:04:18.949198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.605 [2024-11-18 15:04:18.949229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:55.605 [2024-11-18 15:04:18.949238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:17:55.606 [2024-11-18 15:04:18.949247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.950659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.606 [2024-11-18 15:04:18.950695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:55.606 [2024-11-18 15:04:18.950704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:17:55.606 [2024-11-18 15:04:18.950711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.950745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.606 [2024-11-18 15:04:18.950757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:55.606 [2024-11-18 15:04:18.950766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:55.606 [2024-11-18 15:04:18.950773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.950807] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:55.606 [2024-11-18 15:04:18.950817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.606 [2024-11-18 15:04:18.950825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:55.606 [2024-11-18 15:04:18.950834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:55.606 [2024-11-18 15:04:18.950841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.954375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.606 [2024-11-18 15:04:18.954406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:55.606 [2024-11-18 15:04:18.954421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:17:55.606 [2024-11-18 15:04:18.954430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.954497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.606 [2024-11-18 15:04:18.954510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:55.606 [2024-11-18 15:04:18.954519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:55.606 [2024-11-18 15:04:18.954526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.606 [2024-11-18 15:04:18.961993] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 117.628 ms, result 0 00:17:56.984  [2024-11-18T15:04:21.147Z] Copying: 38/1024 [MB] (38 MBps) [2024-11-18T15:04:22.555Z] Copying: 65/1024 [MB] (27 MBps) [2024-11-18T15:04:23.492Z] Copying: 100/1024 [MB] (35 MBps) [2024-11-18T15:04:24.503Z] Copying: 136/1024 [MB] (35 MBps) [2024-11-18T15:04:25.437Z] Copying: 183/1024 [MB] (46 MBps) [2024-11-18T15:04:26.373Z] Copying: 229/1024 [MB] (46 MBps) [2024-11-18T15:04:27.308Z] Copying: 276/1024 [MB] (46 MBps) [2024-11-18T15:04:28.238Z] Copying: 322/1024 [MB] (46 MBps) [2024-11-18T15:04:29.170Z] Copying: 369/1024 [MB] (46 MBps) [2024-11-18T15:04:30.544Z] Copying: 417/1024 [MB] (48 MBps) [2024-11-18T15:04:31.479Z] Copying: 465/1024 [MB] (47 MBps) [2024-11-18T15:04:32.414Z] Copying: 515/1024 [MB] (50 MBps) [2024-11-18T15:04:33.349Z] Copying: 563/1024 [MB] (48 MBps) [2024-11-18T15:04:34.283Z] Copying: 614/1024 [MB] (50 MBps) [2024-11-18T15:04:35.216Z] Copying: 662/1024 [MB] (48 MBps) [2024-11-18T15:04:36.150Z] Copying: 711/1024 [MB] (48 MBps) [2024-11-18T15:04:37.523Z] Copying: 759/1024 [MB] (47 MBps) [2024-11-18T15:04:38.458Z] Copying: 811/1024 [MB] (52 MBps) [2024-11-18T15:04:39.393Z] Copying: 858/1024 [MB] (47 MBps) [2024-11-18T15:04:40.328Z] Copying: 905/1024 [MB] (47 MBps) [2024-11-18T15:04:41.262Z] Copying: 955/1024 [MB] (49 MBps) [2024-11-18T15:04:41.521Z] Copying: 1007/1024 [MB] (51 MBps) [2024-11-18T15:04:41.780Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-18 15:04:41.699972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.700074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.190 [2024-11-18 15:04:41.700103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:18.190 [2024-11-18 15:04:41.700121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.700167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.190 [2024-11-18 15:04:41.700984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.701041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.190 [2024-11-18 15:04:41.701063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:18:18.190 [2024-11-18 15:04:41.701082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.701621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.701660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.190 [2024-11-18 15:04:41.701678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:18:18.190 [2024-11-18 15:04:41.701695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.713010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.713062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.190 [2024-11-18 15:04:41.713080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.283 ms 00:18:18.190 [2024-11-18 15:04:41.713093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.729176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 
15:04:41.729226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:18.190 [2024-11-18 15:04:41.729238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.656 ms 00:18:18.190 [2024-11-18 15:04:41.729246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.731542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.731584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.190 [2024-11-18 15:04:41.731596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:18:18.190 [2024-11-18 15:04:41.731604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.190 [2024-11-18 15:04:41.735273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.190 [2024-11-18 15:04:41.735306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.190 [2024-11-18 15:04:41.735328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:18:18.190 [2024-11-18 15:04:41.735337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.788472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.451 [2024-11-18 15:04:41.788526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.451 [2024-11-18 15:04:41.788537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.099 ms 00:18:18.451 [2024-11-18 15:04:41.788545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.790566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.451 [2024-11-18 15:04:41.790600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:18.451 [2024-11-18 15:04:41.790609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:18:18.451 [2024-11-18 15:04:41.790617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.791842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.451 [2024-11-18 15:04:41.791874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:18.451 [2024-11-18 15:04:41.791884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:18:18.451 [2024-11-18 15:04:41.791891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.792762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.451 [2024-11-18 15:04:41.792805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.451 [2024-11-18 15:04:41.792814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:18:18.451 [2024-11-18 15:04:41.792821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.793645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.451 [2024-11-18 15:04:41.793675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.451 [2024-11-18 15:04:41.793684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:18:18.451 [2024-11-18 15:04:41.793691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.451 [2024-11-18 15:04:41.793718] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.451 [2024-11-18 15:04:41.793742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:18:18.451 [2024-11-18 15:04:41.793753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.451 [2024-11-18 15:04:41.793762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793939] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.793992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 
15:04:41.794128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:18:18.452 [2024-11-18 15:04:41.794312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.452 [2024-11-18 15:04:41.794430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.453 [2024-11-18 15:04:41.794533] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.453 [2024-11-18 15:04:41.794542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ffa0f95-3a55-45bb-8c34-aba5a0743530 00:18:18.453 [2024-11-18 15:04:41.794549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:18:18.453 [2024-11-18 15:04:41.794556] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12992 00:18:18.453 [2024-11-18 15:04:41.794562] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12032 00:18:18.453 [2024-11-18 15:04:41.794576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0798 00:18:18.453 [2024-11-18 15:04:41.794583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.453 [2024-11-18 15:04:41.794591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.453 [2024-11-18 15:04:41.794602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.453 [2024-11-18 15:04:41.794608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.453 [2024-11-18 15:04:41.794614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.453 [2024-11-18 15:04:41.794621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.453 [2024-11-18 15:04:41.794629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.453 [2024-11-18 15:04:41.794637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:18:18.453 [2024-11-18 15:04:41.794649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.796500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.453 [2024-11-18 15:04:41.796528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.453 [2024-11-18 15:04:41.796542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:18:18.453 [2024-11-18 15:04:41.796555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.796627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.453 [2024-11-18 15:04:41.796643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:18.453 [2024-11-18 15:04:41.796652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:18.453 [2024-11-18 15:04:41.796660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.802898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.802931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.453 [2024-11-18 15:04:41.802939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.802948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.803003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.803013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.453 [2024-11-18 15:04:41.803021] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.803029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.803108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.803131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.453 [2024-11-18 15:04:41.803139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.803147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.803162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.803170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.453 [2024-11-18 15:04:41.803178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.803186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.814161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.814202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.453 [2024-11-18 15:04:41.814213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.814222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.818903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.818936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.453 [2024-11-18 15:04:41.818946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.818954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.453 [2024-11-18 15:04:41.819032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.453 [2024-11-18 15:04:41.819092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.453 [2024-11-18 15:04:41.819183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:18:18.453 [2024-11-18 15:04:41.819242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.453 [2024-11-18 15:04:41.819337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.453 [2024-11-18 15:04:41.819403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.453 [2024-11-18 15:04:41.819411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.453 [2024-11-18 15:04:41.819418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.453 [2024-11-18 15:04:41.819544] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 119.563 ms, result 0 00:18:18.712 00:18:18.712 00:18:18.712 15:04:42 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:20.615 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:20.615 15:04:44 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:18:20.615 15:04:44 -- ftl/restore.sh@85 -- # restore_kill 00:18:20.615 15:04:44 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:20.874 15:04:44 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:20.874 15:04:44 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.874 Process with pid 83794 is not found 00:18:20.874 Remove shared memory files 00:18:20.874 15:04:44 -- ftl/restore.sh@32 -- # killprocess 83794 00:18:20.874 15:04:44 -- common/autotest_common.sh@936 -- # '[' -z 83794 ']' 00:18:20.874 15:04:44 -- common/autotest_common.sh@940 -- # kill -0 83794 00:18:20.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83794) - No such process 00:18:20.874 15:04:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83794 is not found' 00:18:20.874 15:04:44 -- ftl/restore.sh@33 -- # remove_shm 00:18:20.874 15:04:44 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:20.874 15:04:44 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:20.874 15:04:44 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:20.874 15:04:44 -- ftl/common.sh@207 -- # rm -f rm -f 00:18:20.874 15:04:44 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:20.874 15:04:44 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:20.874 00:18:20.874 real 1m58.129s 00:18:20.874 user 1m48.328s 00:18:20.874 sys 0m11.343s 00:18:20.874 15:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:20.874 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:18:20.874 ************************************ 00:18:20.874 END TEST ftl_restore 00:18:20.874 ************************************ 00:18:20.874 15:04:44 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:20.874 15:04:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 
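[Reader's note, not part of the captured output] The restore test above ends cleanly: md5sum -c confirms the test file survived the FTL shutdown/startup cycle, and the ftl_dev_dump_stats block a little earlier is internally consistent — "total valid LBAs: 133632" matches Band 1's valid count, and the printed WAF is exactly total media writes over host writes. A minimal recomputation from the dumped counters, assuming only a POSIX awk:

# WAF (write amplification factor) from the counters dumped above:
#   total writes: 12992, user writes: 12032
awk 'BEGIN { printf "WAF = %.4f\n", 12992 / 12032 }'
# prints: WAF = 1.0798   (matches the "WAF: 1.0798" line in the dump)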
00:18:20.874 15:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.874 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:18:20.874 ************************************ 00:18:20.874 START TEST ftl_dirty_shutdown 00:18:20.874 ************************************ 00:18:20.874 15:04:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:20.874 * Looking for test storage... 00:18:20.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.874 15:04:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:20.874 15:04:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:20.874 15:04:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:20.874 15:04:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:20.874 15:04:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:20.874 15:04:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:20.874 15:04:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:20.874 15:04:44 -- scripts/common.sh@335 -- # IFS=.-: 00:18:20.874 15:04:44 -- scripts/common.sh@335 -- # read -ra ver1 00:18:20.874 15:04:44 -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.874 15:04:44 -- scripts/common.sh@336 -- # read -ra ver2 00:18:20.874 15:04:44 -- scripts/common.sh@337 -- # local 'op=<' 00:18:20.874 15:04:44 -- scripts/common.sh@339 -- # ver1_l=2 00:18:20.874 15:04:44 -- scripts/common.sh@340 -- # ver2_l=1 00:18:20.874 15:04:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:20.874 15:04:44 -- scripts/common.sh@343 -- # case "$op" in 00:18:20.874 15:04:44 -- scripts/common.sh@344 -- # : 1 00:18:20.874 15:04:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:20.874 15:04:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.874 15:04:44 -- scripts/common.sh@364 -- # decimal 1 00:18:20.874 15:04:44 -- scripts/common.sh@352 -- # local d=1 00:18:20.874 15:04:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.874 15:04:44 -- scripts/common.sh@354 -- # echo 1 00:18:20.874 15:04:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:21.134 15:04:44 -- scripts/common.sh@365 -- # decimal 2 00:18:21.134 15:04:44 -- scripts/common.sh@352 -- # local d=2 00:18:21.134 15:04:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.134 15:04:44 -- scripts/common.sh@354 -- # echo 2 00:18:21.134 15:04:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:21.134 15:04:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:21.134 15:04:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:21.134 15:04:44 -- scripts/common.sh@367 -- # return 0 00:18:21.134 15:04:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.134 15:04:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:21.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.134 --rc genhtml_branch_coverage=1 00:18:21.134 --rc genhtml_function_coverage=1 00:18:21.134 --rc genhtml_legend=1 00:18:21.134 --rc geninfo_all_blocks=1 00:18:21.134 --rc geninfo_unexecuted_blocks=1 00:18:21.134 00:18:21.134 ' 00:18:21.134 15:04:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:21.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.134 --rc genhtml_branch_coverage=1 00:18:21.134 --rc genhtml_function_coverage=1 00:18:21.134 --rc genhtml_legend=1 00:18:21.134 --rc geninfo_all_blocks=1 00:18:21.134 --rc geninfo_unexecuted_blocks=1 00:18:21.134 00:18:21.134 ' 00:18:21.134 15:04:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:21.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.134 --rc genhtml_branch_coverage=1 00:18:21.134 --rc genhtml_function_coverage=1 00:18:21.134 --rc genhtml_legend=1 00:18:21.134 --rc geninfo_all_blocks=1 00:18:21.134 --rc geninfo_unexecuted_blocks=1 00:18:21.134 00:18:21.134 ' 00:18:21.134 15:04:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:21.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.134 --rc genhtml_branch_coverage=1 00:18:21.134 --rc genhtml_function_coverage=1 00:18:21.134 --rc genhtml_legend=1 00:18:21.134 --rc geninfo_all_blocks=1 00:18:21.134 --rc geninfo_unexecuted_blocks=1 00:18:21.134 00:18:21.134 ' 00:18:21.134 15:04:44 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:21.134 15:04:44 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:18:21.134 15:04:44 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.134 15:04:44 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.134 15:04:44 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
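[Reader's note] The xtrace above is scripts/common.sh gating on the lcov version: `lt 1.15 2` splits each version string into component arrays with `IFS=.-:` and compares field by field. A condensed paraphrase of that pattern — a sketch, not the verbatim helper, which additionally normalizes non-numeric components through its `decimal` function:

# Simplified version-compare in the spirit of cmp_versions traced above:
# split on '.', compare numerically per field, pad the shorter with zeros.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal -> not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"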
00:18:21.134 15:04:44 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:21.134 15:04:44 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.134 15:04:44 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:21.134 15:04:44 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:21.134 15:04:44 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.134 15:04:44 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.134 15:04:44 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:21.134 15:04:44 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:21.134 15:04:44 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.134 15:04:44 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.134 15:04:44 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:21.134 15:04:44 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:21.135 15:04:44 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.135 15:04:44 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.135 15:04:44 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:21.135 15:04:44 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:21.135 15:04:44 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.135 15:04:44 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.135 15:04:44 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.135 15:04:44 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.135 15:04:44 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:21.135 15:04:44 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:21.135 15:04:44 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.135 15:04:44 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@45 -- # svcpid=85132 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 85132 00:18:21.135 15:04:44 -- common/autotest_common.sh@829 -- # '[' -z 85132 ']' 00:18:21.135 15:04:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.135 
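[Reader's note] The argument handling traced above corresponds to the invocation `dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0`: the `getopts :u:c:` loop binds -c to the NV-cache PCIe address, the positional argument becomes the base device, and the timeout/block/chunk defaults are set afterwards. A stand-alone sketch of that parse, with the option string and default values taken from the trace (error handling is mine, and `shift $((OPTIND - 1))` stands in for the hard-coded `shift 2` the log shows):

#!/usr/bin/env bash
while getopts :u:c: opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 -> write-buffer cache bdf
        u) uuid=$OPTARG ;;       # -u <value> is also admitted by the option string
        *) echo "usage: $0 [-u uuid] [-c cache_bdf] base_bdf" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
device=$1                        # base device, e.g. 0000:00:07.0
timeout=240 block_size=4096 chunk_size=262144 data_size=262144
echo "base=$device cache=${nv_cache:-none} timeout=${timeout}s"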
15:04:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.135 15:04:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.135 15:04:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.135 15:04:44 -- common/autotest_common.sh@10 -- # set +x 00:18:21.135 15:04:44 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:21.135 [2024-11-18 15:04:44.543010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:21.135 [2024-11-18 15:04:44.543101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85132 ] 00:18:21.135 [2024-11-18 15:04:44.680221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.393 [2024-11-18 15:04:44.721811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.393 [2024-11-18 15:04:44.722011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.960 15:04:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.960 15:04:45 -- common/autotest_common.sh@862 -- # return 0 00:18:21.960 15:04:45 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:21.960 15:04:45 -- ftl/common.sh@54 -- # local name=nvme0 00:18:21.960 15:04:45 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:21.960 15:04:45 -- ftl/common.sh@56 -- # local size=103424 00:18:21.960 15:04:45 -- ftl/common.sh@59 -- # local base_bdev 00:18:21.960 15:04:45 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:22.219 15:04:45 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:22.219 15:04:45 -- ftl/common.sh@62 -- # local base_size 00:18:22.219 15:04:45 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:22.219 15:04:45 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:18:22.219 15:04:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:22.219 15:04:45 -- common/autotest_common.sh@1369 -- # local bs 00:18:22.219 15:04:45 -- common/autotest_common.sh@1370 -- # local nb 00:18:22.219 15:04:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:22.478 15:04:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:22.478 { 00:18:22.478 "name": "nvme0n1", 00:18:22.478 "aliases": [ 00:18:22.478 "18754f64-fbcd-4a90-a719-8a7f5f0775b2" 00:18:22.478 ], 00:18:22.478 "product_name": "NVMe disk", 00:18:22.478 "block_size": 4096, 00:18:22.478 "num_blocks": 1310720, 00:18:22.478 "uuid": "18754f64-fbcd-4a90-a719-8a7f5f0775b2", 00:18:22.478 "assigned_rate_limits": { 00:18:22.478 "rw_ios_per_sec": 0, 00:18:22.478 "rw_mbytes_per_sec": 0, 00:18:22.478 "r_mbytes_per_sec": 0, 00:18:22.478 "w_mbytes_per_sec": 0 00:18:22.478 }, 00:18:22.478 "claimed": true, 00:18:22.478 "claim_type": "read_many_write_one", 00:18:22.478 "zoned": false, 00:18:22.478 "supported_io_types": { 00:18:22.478 "read": true, 00:18:22.478 "write": true, 00:18:22.478 "unmap": true, 00:18:22.478 "write_zeroes": true, 00:18:22.478 "flush": true, 00:18:22.478 "reset": true, 00:18:22.478 "compare": 
true, 00:18:22.478 "compare_and_write": false, 00:18:22.478 "abort": true, 00:18:22.478 "nvme_admin": true, 00:18:22.478 "nvme_io": true 00:18:22.478 }, 00:18:22.478 "driver_specific": { 00:18:22.478 "nvme": [ 00:18:22.478 { 00:18:22.478 "pci_address": "0000:00:07.0", 00:18:22.478 "trid": { 00:18:22.478 "trtype": "PCIe", 00:18:22.478 "traddr": "0000:00:07.0" 00:18:22.478 }, 00:18:22.478 "ctrlr_data": { 00:18:22.478 "cntlid": 0, 00:18:22.478 "vendor_id": "0x1b36", 00:18:22.478 "model_number": "QEMU NVMe Ctrl", 00:18:22.478 "serial_number": "12341", 00:18:22.478 "firmware_revision": "8.0.0", 00:18:22.478 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:22.478 "oacs": { 00:18:22.478 "security": 0, 00:18:22.478 "format": 1, 00:18:22.478 "firmware": 0, 00:18:22.478 "ns_manage": 1 00:18:22.478 }, 00:18:22.478 "multi_ctrlr": false, 00:18:22.478 "ana_reporting": false 00:18:22.478 }, 00:18:22.478 "vs": { 00:18:22.478 "nvme_version": "1.4" 00:18:22.478 }, 00:18:22.478 "ns_data": { 00:18:22.478 "id": 1, 00:18:22.478 "can_share": false 00:18:22.478 } 00:18:22.478 } 00:18:22.478 ], 00:18:22.478 "mp_policy": "active_passive" 00:18:22.478 } 00:18:22.478 } 00:18:22.478 ]' 00:18:22.478 15:04:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:22.478 15:04:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:22.478 15:04:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:22.478 15:04:45 -- common/autotest_common.sh@1373 -- # nb=1310720 00:18:22.478 15:04:45 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:18:22.478 15:04:45 -- common/autotest_common.sh@1377 -- # echo 5120 00:18:22.478 15:04:45 -- ftl/common.sh@63 -- # base_size=5120 00:18:22.478 15:04:45 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:22.478 15:04:45 -- ftl/common.sh@67 -- # clear_lvols 00:18:22.478 15:04:45 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:22.478 15:04:45 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:22.737 15:04:46 -- ftl/common.sh@28 -- # stores=3e21963b-1bb6-4fce-a624-2c6dd78fec4f 00:18:22.737 15:04:46 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:22.737 15:04:46 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e21963b-1bb6-4fce-a624-2c6dd78fec4f 00:18:22.737 15:04:46 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:22.996 15:04:46 -- ftl/common.sh@68 -- # lvs=3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0 00:18:22.996 15:04:46 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0 00:18:23.255 15:04:46 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:18:23.255 15:04:46 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- ftl/common.sh@35 -- # local name=nvc0 00:18:23.255 15:04:46 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:23.255 15:04:46 -- ftl/common.sh@37 -- # local base_bdev=87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- ftl/common.sh@38 -- # local cache_size= 00:18:23.255 15:04:46 -- ftl/common.sh@41 -- # get_bdev_size 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- common/autotest_common.sh@1367 -- # local bdev_name=87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- 
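[Reader's note] The get_bdev_size helper traced above is where the 5120 comes from: it pulls block_size and num_blocks out of `rpc.py bdev_get_bdevs -b <name>` with jq and converts to MiB, and 1310720 blocks of 4096 B is exactly 5 GiB. A stand-alone equivalent, assuming a running target and jq as in the test:

# Equivalent of get_bdev_size as traced: query the bdev, multiply
# block_size by num_blocks, report MiB.
#   1310720 * 4096 B = 5368709120 B = 5120 MiB
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # -> 4096
nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # -> 1310720
echo $(( bs * nb / 1024 / 1024 ))                             # -> 5120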
common/autotest_common.sh@1368 -- # local bdev_info 00:18:23.255 15:04:46 -- common/autotest_common.sh@1369 -- # local bs 00:18:23.255 15:04:46 -- common/autotest_common.sh@1370 -- # local nb 00:18:23.255 15:04:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.255 15:04:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:23.255 { 00:18:23.255 "name": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:23.255 "aliases": [ 00:18:23.255 "lvs/nvme0n1p0" 00:18:23.255 ], 00:18:23.255 "product_name": "Logical Volume", 00:18:23.255 "block_size": 4096, 00:18:23.255 "num_blocks": 26476544, 00:18:23.255 "uuid": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:23.255 "assigned_rate_limits": { 00:18:23.255 "rw_ios_per_sec": 0, 00:18:23.255 "rw_mbytes_per_sec": 0, 00:18:23.255 "r_mbytes_per_sec": 0, 00:18:23.255 "w_mbytes_per_sec": 0 00:18:23.255 }, 00:18:23.255 "claimed": false, 00:18:23.255 "zoned": false, 00:18:23.255 "supported_io_types": { 00:18:23.255 "read": true, 00:18:23.255 "write": true, 00:18:23.255 "unmap": true, 00:18:23.255 "write_zeroes": true, 00:18:23.255 "flush": false, 00:18:23.255 "reset": true, 00:18:23.255 "compare": false, 00:18:23.255 "compare_and_write": false, 00:18:23.255 "abort": false, 00:18:23.255 "nvme_admin": false, 00:18:23.255 "nvme_io": false 00:18:23.255 }, 00:18:23.255 "driver_specific": { 00:18:23.255 "lvol": { 00:18:23.256 "lvol_store_uuid": "3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0", 00:18:23.256 "base_bdev": "nvme0n1", 00:18:23.256 "thin_provision": true, 00:18:23.256 "snapshot": false, 00:18:23.256 "clone": false, 00:18:23.256 "esnap_clone": false 00:18:23.256 } 00:18:23.256 } 00:18:23.256 } 00:18:23.256 ]' 00:18:23.256 15:04:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:23.256 15:04:46 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:23.256 15:04:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:23.514 15:04:46 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:23.514 15:04:46 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:23.514 15:04:46 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:23.514 15:04:46 -- ftl/common.sh@41 -- # local base_size=5171 00:18:23.514 15:04:46 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:23.514 15:04:46 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:23.773 15:04:47 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:23.773 15:04:47 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:23.773 15:04:47 -- ftl/common.sh@48 -- # get_bdev_size 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.773 15:04:47 -- common/autotest_common.sh@1367 -- # local bdev_name=87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:23.773 15:04:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:23.773 15:04:47 -- common/autotest_common.sh@1369 -- # local bs 00:18:23.773 15:04:47 -- common/autotest_common.sh@1370 -- # local nb 00:18:23.773 15:04:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:24.031 15:04:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:24.031 { 00:18:24.031 "name": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:24.031 "aliases": [ 00:18:24.031 "lvs/nvme0n1p0" 00:18:24.031 ], 00:18:24.031 "product_name": "Logical Volume", 00:18:24.031 "block_size": 4096, 00:18:24.031 "num_blocks": 26476544, 
00:18:24.031 "uuid": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:24.031 "assigned_rate_limits": { 00:18:24.031 "rw_ios_per_sec": 0, 00:18:24.031 "rw_mbytes_per_sec": 0, 00:18:24.031 "r_mbytes_per_sec": 0, 00:18:24.031 "w_mbytes_per_sec": 0 00:18:24.031 }, 00:18:24.032 "claimed": false, 00:18:24.032 "zoned": false, 00:18:24.032 "supported_io_types": { 00:18:24.032 "read": true, 00:18:24.032 "write": true, 00:18:24.032 "unmap": true, 00:18:24.032 "write_zeroes": true, 00:18:24.032 "flush": false, 00:18:24.032 "reset": true, 00:18:24.032 "compare": false, 00:18:24.032 "compare_and_write": false, 00:18:24.032 "abort": false, 00:18:24.032 "nvme_admin": false, 00:18:24.032 "nvme_io": false 00:18:24.032 }, 00:18:24.032 "driver_specific": { 00:18:24.032 "lvol": { 00:18:24.032 "lvol_store_uuid": "3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0", 00:18:24.032 "base_bdev": "nvme0n1", 00:18:24.032 "thin_provision": true, 00:18:24.032 "snapshot": false, 00:18:24.032 "clone": false, 00:18:24.032 "esnap_clone": false 00:18:24.032 } 00:18:24.032 } 00:18:24.032 } 00:18:24.032 ]' 00:18:24.032 15:04:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:24.032 15:04:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:24.032 15:04:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:24.032 15:04:47 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:24.032 15:04:47 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:24.032 15:04:47 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:24.032 15:04:47 -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.032 15:04:47 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:24.291 15:04:47 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:18:24.291 15:04:47 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:24.291 15:04:47 -- common/autotest_common.sh@1367 -- # local bdev_name=87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:24.291 15:04:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:24.291 15:04:47 -- common/autotest_common.sh@1369 -- # local bs 00:18:24.291 15:04:47 -- common/autotest_common.sh@1370 -- # local nb 00:18:24.291 15:04:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 00:18:24.291 15:04:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:24.291 { 00:18:24.291 "name": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:24.291 "aliases": [ 00:18:24.291 "lvs/nvme0n1p0" 00:18:24.291 ], 00:18:24.291 "product_name": "Logical Volume", 00:18:24.291 "block_size": 4096, 00:18:24.291 "num_blocks": 26476544, 00:18:24.291 "uuid": "87fff98f-9c49-4cbf-aca5-e14dde5ae8b3", 00:18:24.291 "assigned_rate_limits": { 00:18:24.291 "rw_ios_per_sec": 0, 00:18:24.291 "rw_mbytes_per_sec": 0, 00:18:24.291 "r_mbytes_per_sec": 0, 00:18:24.291 "w_mbytes_per_sec": 0 00:18:24.291 }, 00:18:24.291 "claimed": false, 00:18:24.291 "zoned": false, 00:18:24.291 "supported_io_types": { 00:18:24.291 "read": true, 00:18:24.291 "write": true, 00:18:24.291 "unmap": true, 00:18:24.291 "write_zeroes": true, 00:18:24.291 "flush": false, 00:18:24.291 "reset": true, 00:18:24.291 "compare": false, 00:18:24.291 "compare_and_write": false, 00:18:24.291 "abort": false, 00:18:24.291 "nvme_admin": false, 00:18:24.291 "nvme_io": false 00:18:24.291 }, 00:18:24.291 "driver_specific": { 00:18:24.291 "lvol": { 00:18:24.291 "lvol_store_uuid": 
"3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0", 00:18:24.291 "base_bdev": "nvme0n1", 00:18:24.291 "thin_provision": true, 00:18:24.291 "snapshot": false, 00:18:24.291 "clone": false, 00:18:24.291 "esnap_clone": false 00:18:24.291 } 00:18:24.291 } 00:18:24.291 } 00:18:24.291 ]' 00:18:24.291 15:04:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:24.291 15:04:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:24.291 15:04:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:24.550 15:04:47 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:24.550 15:04:47 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:24.550 15:04:47 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 --l2p_dram_limit 10' 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:24.550 15:04:47 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 87fff98f-9c49-4cbf-aca5-e14dde5ae8b3 --l2p_dram_limit 10 -c nvc0n1p0 00:18:24.809 [2024-11-18 15:04:48.143046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.143250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.809 [2024-11-18 15:04:48.143275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.809 [2024-11-18 15:04:48.143282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.143370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.143383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.809 [2024-11-18 15:04:48.143395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:24.809 [2024-11-18 15:04:48.143402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.143423] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.809 [2024-11-18 15:04:48.143674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.809 [2024-11-18 15:04:48.143688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.143695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.809 [2024-11-18 15:04:48.143703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:18:24.809 [2024-11-18 15:04:48.143710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.143780] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 10589772-454a-40e2-9fb5-d6d6cd01fe37 00:18:24.809 [2024-11-18 15:04:48.145046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.145078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:24.809 [2024-11-18 15:04:48.145086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.024 ms 00:18:24.809 [2024-11-18 15:04:48.145094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.151830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.151863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.809 [2024-11-18 15:04:48.151871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.700 ms 00:18:24.809 [2024-11-18 15:04:48.151885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.151962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.151974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.809 [2024-11-18 15:04:48.151981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:24.809 [2024-11-18 15:04:48.151990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.152039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.152050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.809 [2024-11-18 15:04:48.152058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:24.809 [2024-11-18 15:04:48.152065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.152087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.809 [2024-11-18 15:04:48.153717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.153873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.809 [2024-11-18 15:04:48.153888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:18:24.809 [2024-11-18 15:04:48.153896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.809 [2024-11-18 15:04:48.153934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.809 [2024-11-18 15:04:48.153942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.809 [2024-11-18 15:04:48.153954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:24.809 [2024-11-18 15:04:48.153960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.810 [2024-11-18 15:04:48.153976] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:24.810 [2024-11-18 15:04:48.154068] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:24.810 [2024-11-18 15:04:48.154080] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.810 [2024-11-18 15:04:48.154090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:24.810 [2024-11-18 15:04:48.154107] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154118] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154126] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:24.810 [2024-11-18 
15:04:48.154132] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.810 [2024-11-18 15:04:48.154139] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:24.810 [2024-11-18 15:04:48.154144] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:24.810 [2024-11-18 15:04:48.154153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.810 [2024-11-18 15:04:48.154158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.810 [2024-11-18 15:04:48.154166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:18:24.810 [2024-11-18 15:04:48.154171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.810 [2024-11-18 15:04:48.154228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.810 [2024-11-18 15:04:48.154234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.810 [2024-11-18 15:04:48.154242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:24.810 [2024-11-18 15:04:48.154248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.810 [2024-11-18 15:04:48.154308] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.810 [2024-11-18 15:04:48.154331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.810 [2024-11-18 15:04:48.154341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.810 [2024-11-18 15:04:48.154366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.810 [2024-11-18 15:04:48.154386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.810 [2024-11-18 15:04:48.154399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.810 [2024-11-18 15:04:48.154404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:24.810 [2024-11-18 15:04:48.154413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.810 [2024-11-18 15:04:48.154421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.810 [2024-11-18 15:04:48.154428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:24.810 [2024-11-18 15:04:48.154434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.810 [2024-11-18 15:04:48.154448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:24.810 [2024-11-18 15:04:48.154455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:24.810 [2024-11-18 15:04:48.154468] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:24.810 [2024-11-18 15:04:48.154474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.810 [2024-11-18 15:04:48.154488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.810 [2024-11-18 15:04:48.154509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.810 [2024-11-18 15:04:48.154532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.810 [2024-11-18 15:04:48.154555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.810 [2024-11-18 15:04:48.154576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.810 [2024-11-18 15:04:48.154589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.810 [2024-11-18 15:04:48.154596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:24.810 [2024-11-18 15:04:48.154602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.810 [2024-11-18 15:04:48.154609] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.810 [2024-11-18 15:04:48.154618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.810 [2024-11-18 15:04:48.154627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.810 [2024-11-18 15:04:48.154643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.810 [2024-11-18 15:04:48.154648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.810 [2024-11-18 15:04:48.154656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.810 [2024-11-18 15:04:48.154663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.810 [2024-11-18 15:04:48.154670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.810 [2024-11-18 15:04:48.154676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.810 [2024-11-18 15:04:48.154694] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.810 [2024-11-18 15:04:48.154715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.810 [2024-11-18 15:04:48.154725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:24.810 [2024-11-18 15:04:48.154732] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:24.810 [2024-11-18 15:04:48.154740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:24.810 [2024-11-18 15:04:48.154747] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:24.810 [2024-11-18 15:04:48.154755] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:24.810 [2024-11-18 15:04:48.154762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:24.810 [2024-11-18 15:04:48.154770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:24.810 [2024-11-18 15:04:48.154776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:24.810 [2024-11-18 15:04:48.154786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:24.810 [2024-11-18 15:04:48.154792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:24.810 [2024-11-18 15:04:48.154799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:24.810 [2024-11-18 15:04:48.154806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:24.810 [2024-11-18 15:04:48.154813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:24.810 [2024-11-18 15:04:48.154819] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.810 [2024-11-18 15:04:48.154828] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.810 [2024-11-18 15:04:48.154835] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.810 [2024-11-18 15:04:48.154842] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.810 [2024-11-18 15:04:48.154848] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.810 [2024-11-18 15:04:48.154854] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.810 [2024-11-18 15:04:48.154860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.810 [2024-11-18 15:04:48.154867] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.810 [2024-11-18 15:04:48.154874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:18:24.810 [2024-11-18 15:04:48.154884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.810 [2024-11-18 15:04:48.161982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.810 [2024-11-18 15:04:48.162016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.810 [2024-11-18 15:04:48.162024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.062 ms 00:18:24.810 [2024-11-18 15:04:48.162036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.810 [2024-11-18 15:04:48.162114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.162123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.811 [2024-11-18 15:04:48.162130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:24.811 [2024-11-18 15:04:48.162137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.172248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.172280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.811 [2024-11-18 15:04:48.172291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.068 ms 00:18:24.811 [2024-11-18 15:04:48.172299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.172344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.172354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.811 [2024-11-18 15:04:48.172361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:24.811 [2024-11-18 15:04:48.172368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.172773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.172847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.811 [2024-11-18 15:04:48.172861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:18:24.811 [2024-11-18 15:04:48.172869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.172957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.172966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.811 [2024-11-18 15:04:48.172973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:24.811 [2024-11-18 15:04:48.172980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.179331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.179360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.811 [2024-11-18 15:04:48.179368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.335 ms 00:18:24.811 [2024-11-18 15:04:48.179376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.186784] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:18:24.811 [2024-11-18 15:04:48.189613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.189771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.811 [2024-11-18 15:04:48.189788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.179 ms 00:18:24.811 [2024-11-18 15:04:48.189796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.245996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.811 [2024-11-18 15:04:48.246055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:24.811 [2024-11-18 15:04:48.246069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.166 ms 00:18:24.811 [2024-11-18 15:04:48.246075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.811 [2024-11-18 15:04:48.246113] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:24.811 [2024-11-18 15:04:48.246123] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:28.114 [2024-11-18 15:04:51.081062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.114 [2024-11-18 15:04:51.081144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:28.114 [2024-11-18 15:04:51.081162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2834.925 ms 00:18:28.114 [2024-11-18 15:04:51.081171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.114 [2024-11-18 15:04:51.081404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.114 [2024-11-18 15:04:51.081417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:28.114 [2024-11-18 15:04:51.081430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:28.114 [2024-11-18 15:04:51.081439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.114 [2024-11-18 15:04:51.084672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.114 [2024-11-18 15:04:51.084884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:28.114 [2024-11-18 15:04:51.084908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:18:28.114 [2024-11-18 15:04:51.084917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.114 [2024-11-18 15:04:51.088190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.114 [2024-11-18 15:04:51.088218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:28.114 [2024-11-18 15:04:51.088230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:18:28.114 [2024-11-18 15:04:51.088238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.114 [2024-11-18 15:04:51.088518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.114 [2024-11-18 15:04:51.088557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:28.114 [2024-11-18 15:04:51.088629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:28.114 [2024-11-18 15:04:51.088653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.114 [2024-11-18 15:04:51.112305] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.112438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:18:28.114 [2024-11-18 15:04:51.112499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.615 ms
00:18:28.114 [2024-11-18 15:04:51.112523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.117433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.117544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:18:28.114 [2024-11-18 15:04:51.117605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.872 ms
00:18:28.114 [2024-11-18 15:04:51.117628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.119020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.119127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:18:28.114 [2024-11-18 15:04:51.119221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms
00:18:28.114 [2024-11-18 15:04:51.119245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.123005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.123116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:28.114 [2024-11-18 15:04:51.123170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms
00:18:28.114 [2024-11-18 15:04:51.123192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.123235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.123344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:28.114 [2024-11-18 15:04:51.123377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:18:28.114 [2024-11-18 15:04:51.123398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.123483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:28.114 [2024-11-18 15:04:51.123508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:28.114 [2024-11-18 15:04:51.123532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:18:28.114 [2024-11-18 15:04:51.123551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:28.114 [2024-11-18 15:04:51.124587] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2981.112 ms, result 0
00:18:28.114 {
00:18:28.114 "name": "ftl0",
00:18:28.114 "uuid": "10589772-454a-40e2-9fb5-d6d6cd01fe37"
00:18:28.114 }
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:18:28.114 /dev/nbd0
00:18:28.114 15:04:51 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:18:28.114 15:04:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:18:28.114 15:04:51 -- common/autotest_common.sh@867 -- # local i
00:18:28.115 15:04:51 -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:18:28.115 15:04:51 -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:18:28.115 15:04:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:18:28.115 15:04:51 -- common/autotest_common.sh@871 -- # break
00:18:28.115 15:04:51 -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:18:28.115 15:04:51 -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:18:28.115 15:04:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:18:28.115 1+0 records in
00:18:28.115 1+0 records out
00:18:28.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360035 s, 11.4 MB/s
00:18:28.115 15:04:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:18:28.115 15:04:51 -- common/autotest_common.sh@884 -- # size=4096
00:18:28.115 15:04:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:18:28.115 15:04:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:18:28.115 15:04:51 -- common/autotest_common.sh@887 -- # return 0
00:18:28.115 15:04:51 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:18:28.115 [2024-11-18 15:04:51.612443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:18:28.115 [2024-11-18 15:04:51.612558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85268 ]
00:18:28.376 [2024-11-18 15:04:51.759911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:28.376 [2024-11-18 15:04:51.791152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:29.317 [2024-11-18T15:04:53.844Z] Copying: 195/1024 [MB] (195 MBps)
[2024-11-18T15:04:55.275Z] Copying: 392/1024 [MB] (196 MBps)
[2024-11-18T15:04:55.846Z] Copying: 588/1024 [MB] (196 MBps)
[2024-11-18T15:04:56.782Z] Copying: 843/1024 [MB] (254 MBps)
[2024-11-18T15:04:56.782Z] Copying: 1024/1024 [MB] (average 217 MBps)
00:18:33.192
00:18:33.192 15:04:56 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:35.094 15:04:58 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:18:35.094 [2024-11-18 15:04:58.665878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
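The waitfornbd expansion traced above is the gate between creating /dev/nbd0 and writing to it: poll /proc/partitions until the kernel registers the device, then read one block with O_DIRECT to confirm it actually serves I/O. A consolidated sketch of that helper follows (the real one lives in test/common/autotest_common.sh; the /tmp/nbdtest path and the 0.1 s retry sleep here are illustrative, since the trace only shows the first iteration succeeding):

waitfornbd() {
    local nbd_name=$1 i size
    # Poll until the kernel lists the device (xtrace lines @869-@871 above).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Then read a single 4 KiB block with O_DIRECT (@882-@887 above);
    # a non-empty read-back means the device is live.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}

Only after this returns does the test stage 1 GiB of random data (step @75, 262144 x 4096-byte blocks) and push it through the device (step @77), with the md5sum at @76 recorded as the reference checksum.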
00:18:35.094 [2024-11-18 15:04:58.666165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85344 ] 00:18:35.353 [2024-11-18 15:04:58.821175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.353 [2024-11-18 15:04:58.849128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.729  [2024-11-18T15:05:01.253Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-18T15:05:02.188Z] Copying: 57/1024 [MB] (28 MBps) [2024-11-18T15:05:03.120Z] Copying: 81/1024 [MB] (24 MBps) [2024-11-18T15:05:04.053Z] Copying: 109/1024 [MB] (27 MBps) [2024-11-18T15:05:04.985Z] Copying: 141/1024 [MB] (32 MBps) [2024-11-18T15:05:05.918Z] Copying: 172/1024 [MB] (30 MBps) [2024-11-18T15:05:07.292Z] Copying: 202/1024 [MB] (30 MBps) [2024-11-18T15:05:08.225Z] Copying: 232/1024 [MB] (30 MBps) [2024-11-18T15:05:09.159Z] Copying: 263/1024 [MB] (30 MBps) [2024-11-18T15:05:10.091Z] Copying: 293/1024 [MB] (29 MBps) [2024-11-18T15:05:11.024Z] Copying: 324/1024 [MB] (31 MBps) [2024-11-18T15:05:11.958Z] Copying: 356/1024 [MB] (31 MBps) [2024-11-18T15:05:12.914Z] Copying: 386/1024 [MB] (30 MBps) [2024-11-18T15:05:14.288Z] Copying: 416/1024 [MB] (30 MBps) [2024-11-18T15:05:15.221Z] Copying: 446/1024 [MB] (29 MBps) [2024-11-18T15:05:16.155Z] Copying: 476/1024 [MB] (30 MBps) [2024-11-18T15:05:17.088Z] Copying: 507/1024 [MB] (30 MBps) [2024-11-18T15:05:18.021Z] Copying: 538/1024 [MB] (31 MBps) [2024-11-18T15:05:18.954Z] Copying: 570/1024 [MB] (31 MBps) [2024-11-18T15:05:20.324Z] Copying: 602/1024 [MB] (32 MBps) [2024-11-18T15:05:21.258Z] Copying: 633/1024 [MB] (31 MBps) [2024-11-18T15:05:22.191Z] Copying: 664/1024 [MB] (31 MBps) [2024-11-18T15:05:23.124Z] Copying: 695/1024 [MB] (30 MBps) [2024-11-18T15:05:24.058Z] Copying: 726/1024 [MB] (31 MBps) [2024-11-18T15:05:24.992Z] Copying: 757/1024 [MB] (30 MBps) [2024-11-18T15:05:25.925Z] Copying: 788/1024 [MB] (31 MBps) [2024-11-18T15:05:27.298Z] Copying: 821/1024 [MB] (32 MBps) [2024-11-18T15:05:28.231Z] Copying: 851/1024 [MB] (30 MBps) [2024-11-18T15:05:29.165Z] Copying: 882/1024 [MB] (30 MBps) [2024-11-18T15:05:30.098Z] Copying: 913/1024 [MB] (30 MBps) [2024-11-18T15:05:31.031Z] Copying: 943/1024 [MB] (29 MBps) [2024-11-18T15:05:31.965Z] Copying: 974/1024 [MB] (30 MBps) [2024-11-18T15:05:32.532Z] Copying: 1005/1024 [MB] (31 MBps) [2024-11-18T15:05:32.790Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:19:09.200 00:19:09.200 15:05:32 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:19:09.200 15:05:32 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:19:09.459 15:05:32 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:09.719 [2024-11-18 15:05:33.060882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.061095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:09.719 [2024-11-18 15:05:33.061163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:09.719 [2024-11-18 15:05:33.061185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.061221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:09.719 [2024-11-18 15:05:33.061658] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.061696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:09.719 [2024-11-18 15:05:33.061714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:19:09.719 [2024-11-18 15:05:33.061732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.063515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.063610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:09.719 [2024-11-18 15:05:33.063659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:19:09.719 [2024-11-18 15:05:33.063677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.076091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.076188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:09.719 [2024-11-18 15:05:33.076236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.383 ms 00:19:09.719 [2024-11-18 15:05:33.076254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.080998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.081086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:09.719 [2024-11-18 15:05:33.081132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:19:09.719 [2024-11-18 15:05:33.081150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.082239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.082339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:09.719 [2024-11-18 15:05:33.082437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:19:09.719 [2024-11-18 15:05:33.082457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.086825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.086916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:09.719 [2024-11-18 15:05:33.086959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:19:09.719 [2024-11-18 15:05:33.086978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.087084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.087106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:09.719 [2024-11-18 15:05:33.087125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:09.719 [2024-11-18 15:05:33.087186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.089039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.089120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:09.719 [2024-11-18 15:05:33.089161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:19:09.719 [2024-11-18 15:05:33.089178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
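Everything from 'Deinit core IO channel' down to the 'persist band info metadata' step above is the clean half of the shutdown: bdev_ftl_unload runs the shutdown pipeline, persisting one metadata region per step (L2P, NV cache metadata, valid map, P2L, band info), with trim metadata, the superblock, and 'Set FTL clean state' following just below. The commands that drove it, verbatim from steps @78-@80 earlier in the log:

# Flush the nbd device, detach it, then cleanly unload the FTL bdev.
sync /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0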
00:19:09.719 [2024-11-18 15:05:33.090758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.090841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:09.719 [2024-11-18 15:05:33.090959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:19:09.719 [2024-11-18 15:05:33.090976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.719 [2024-11-18 15:05:33.092042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.719 [2024-11-18 15:05:33.092124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:09.719 [2024-11-18 15:05:33.092165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:19:09.719 [2024-11-18 15:05:33.092181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.720 [2024-11-18 15:05:33.093132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.720 [2024-11-18 15:05:33.093214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:09.720 [2024-11-18 15:05:33.093258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:19:09.720 [2024-11-18 15:05:33.093275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.720 [2024-11-18 15:05:33.093307] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:09.720 [2024-11-18 15:05:33.093431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.093903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.094974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095099] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 
15:05:33.095932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.095956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:09.720 [2024-11-18 15:05:33.096191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:19:09.721 [2024-11-18 15:05:33.096229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:09.721 [2024-11-18 15:05:33.096306] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:09.721 [2024-11-18 15:05:33.096314] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10589772-454a-40e2-9fb5-d6d6cd01fe37 00:19:09.721 [2024-11-18 15:05:33.096331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:09.721 [2024-11-18 15:05:33.096340] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:09.721 [2024-11-18 15:05:33.096345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:09.721 [2024-11-18 15:05:33.096354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:09.721 [2024-11-18 15:05:33.096359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:09.721 [2024-11-18 15:05:33.096368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:09.721 [2024-11-18 15:05:33.096373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:09.721 [2024-11-18 15:05:33.096379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:09.721 [2024-11-18 15:05:33.096385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:09.721 [2024-11-18 15:05:33.096391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.721 [2024-11-18 15:05:33.096397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:09.721 [2024-11-18 15:05:33.096405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:19:09.721 [2024-11-18 15:05:33.096410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.097733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.721 [2024-11-18 15:05:33.097747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:09.721 [2024-11-18 15:05:33.097756] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:19:09.721 [2024-11-18 15:05:33.097761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.097817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.721 [2024-11-18 15:05:33.097824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:09.721 [2024-11-18 15:05:33.097832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:09.721 [2024-11-18 15:05:33.097838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.102556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.102644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.721 [2024-11-18 15:05:33.102687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.102714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.102788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.102815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.721 [2024-11-18 15:05:33.102880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.102902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.102962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.102989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.721 [2024-11-18 15:05:33.103006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.103068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.103098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.103114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.721 [2024-11-18 15:05:33.103191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.103210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.111504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.111643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.721 [2024-11-18 15:05:33.111730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.111750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.114947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.721 [2024-11-18 15:05:33.115118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.115198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:19:09.721 [2024-11-18 15:05:33.115371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.115422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.721 [2024-11-18 15:05:33.115545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.115637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.721 [2024-11-18 15:05:33.115715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.115791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:09.721 [2024-11-18 15:05:33.115859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.115918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.115956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.721 [2024-11-18 15:05:33.115978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.115992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.116047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.721 [2024-11-18 15:05:33.116122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.721 [2024-11-18 15:05:33.116142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.721 [2024-11-18 15:05:33.116156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.721 [2024-11-18 15:05:33.116274] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.361 ms, result 0 00:19:09.721 true 00:19:09.721 15:05:33 -- ftl/dirty_shutdown.sh@83 -- # kill -9 85132 00:19:09.721 15:05:33 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid85132 00:19:09.721 15:05:33 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:19:09.721 [2024-11-18 15:05:33.189847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
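With the clean 'FTL shutdown' finished, the script forces the dirty path: step @83 kill -9s the spdk_tgt that hosted the bdev stack (pid 85132 in this run), so the underlying blobstore never gets a clean detach, @84 deletes its shm trace file, and @87-@88 (@87 above, @88 just below) stage a second 1 GiB file and write it through ftl0 with spdk_dd alone, which rebuilds the bdev stack in-process from the JSON captured at steps @64-@66. In script form (the svcpid variable name is illustrative; paths as in this run):

kill -9 "$svcpid"                                   # 85132 here
rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

--ob names an output bdev rather than a file, and --seek offsets the write by 262144 blocks so the second file lands past the data written earlier; the shell's 'Killed' notice and the in-process reload, including blobstore recovery, follow below.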
00:19:09.721 [2024-11-18 15:05:33.190112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85713 ] 00:19:09.980 [2024-11-18 15:05:33.333404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.980 [2024-11-18 15:05:33.363658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.916  [2024-11-18T15:05:35.442Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-18T15:05:36.819Z] Copying: 519/1024 [MB] (259 MBps) [2024-11-18T15:05:37.386Z] Copying: 777/1024 [MB] (257 MBps) [2024-11-18T15:05:37.645Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:19:14.055 00:19:14.055 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 85132 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:19:14.055 15:05:37 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:14.055 [2024-11-18 15:05:37.580948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:14.055 [2024-11-18 15:05:37.581056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85761 ] 00:19:14.313 [2024-11-18 15:05:37.729961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.314 [2024-11-18 15:05:37.758859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.314 [2024-11-18 15:05:37.840657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.314 [2024-11-18 15:05:37.840717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.314 [2024-11-18 15:05:37.900390] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:14.314 [2024-11-18 15:05:37.900586] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:14.314 [2024-11-18 15:05:37.900717] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:14.574 [2024-11-18 15:05:38.079977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.080191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.574 [2024-11-18 15:05:38.080212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:14.574 [2024-11-18 15:05:38.080220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.080280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.080293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.574 [2024-11-18 15:05:38.080307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:14.574 [2024-11-18 15:05:38.080329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.080353] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.574 [2024-11-18 15:05:38.080578] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:19:14.574 [2024-11-18 15:05:38.080643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.080650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.574 [2024-11-18 15:05:38.080662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:14.574 [2024-11-18 15:05:38.080668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.081713] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:14.574 [2024-11-18 15:05:38.083843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.083877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:14.574 [2024-11-18 15:05:38.083886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:19:14.574 [2024-11-18 15:05:38.083894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.083945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.083955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:14.574 [2024-11-18 15:05:38.083962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:14.574 [2024-11-18 15:05:38.083974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.088476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.088505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.574 [2024-11-18 15:05:38.088518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:19:14.574 [2024-11-18 15:05:38.088529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.088593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.088601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.574 [2024-11-18 15:05:38.088609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:14.574 [2024-11-18 15:05:38.088621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.088665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.088674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.574 [2024-11-18 15:05:38.088681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:14.574 [2024-11-18 15:05:38.088688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.088711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.574 [2024-11-18 15:05:38.089981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.090007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.574 [2024-11-18 15:05:38.090020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:19:14.574 [2024-11-18 15:05:38.090029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.090059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 
[2024-11-18 15:05:38.090071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.574 [2024-11-18 15:05:38.090079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:14.574 [2024-11-18 15:05:38.090089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.090115] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:14.574 [2024-11-18 15:05:38.090132] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:14.574 [2024-11-18 15:05:38.090164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:14.574 [2024-11-18 15:05:38.090184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:14.574 [2024-11-18 15:05:38.090256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:14.574 [2024-11-18 15:05:38.090266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.574 [2024-11-18 15:05:38.090275] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:14.574 [2024-11-18 15:05:38.090285] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.574 [2024-11-18 15:05:38.090297] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.574 [2024-11-18 15:05:38.090304] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:14.574 [2024-11-18 15:05:38.090311] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.574 [2024-11-18 15:05:38.090337] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:14.574 [2024-11-18 15:05:38.090345] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:14.574 [2024-11-18 15:05:38.090353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.090360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.574 [2024-11-18 15:05:38.090367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:19:14.574 [2024-11-18 15:05:38.090374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.090433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.574 [2024-11-18 15:05:38.090441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.574 [2024-11-18 15:05:38.090448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:14.574 [2024-11-18 15:05:38.090455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.574 [2024-11-18 15:05:38.090524] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.574 [2024-11-18 15:05:38.090533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.574 [2024-11-18 15:05:38.090541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.574 [2024-11-18 15:05:38.090548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.574 [2024-11-18 15:05:38.090555] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.574 [2024-11-18 15:05:38.090566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.574 [2024-11-18 15:05:38.090573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:14.574 [2024-11-18 15:05:38.090579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.574 [2024-11-18 15:05:38.090585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:14.574 [2024-11-18 15:05:38.090598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.574 [2024-11-18 15:05:38.090605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.574 [2024-11-18 15:05:38.090611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:14.574 [2024-11-18 15:05:38.090618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.575 [2024-11-18 15:05:38.090625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.575 [2024-11-18 15:05:38.090633] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:14.575 [2024-11-18 15:05:38.090640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.575 [2024-11-18 15:05:38.090656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:14.575 [2024-11-18 15:05:38.090663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:14.575 [2024-11-18 15:05:38.090678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:14.575 [2024-11-18 15:05:38.090689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.575 [2024-11-18 15:05:38.090711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.575 [2024-11-18 15:05:38.090734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.575 [2024-11-18 15:05:38.090755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.575 [2024-11-18 15:05:38.090777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.575 [2024-11-18 15:05:38.090799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:14.575 [2024-11-18 
15:05:38.090806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.575 [2024-11-18 15:05:38.090818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.575 [2024-11-18 15:05:38.090826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:14.575 [2024-11-18 15:05:38.090833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.575 [2024-11-18 15:05:38.090841] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.575 [2024-11-18 15:05:38.090849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.575 [2024-11-18 15:05:38.090857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.575 [2024-11-18 15:05:38.090877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.575 [2024-11-18 15:05:38.090884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.575 [2024-11-18 15:05:38.090892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.575 [2024-11-18 15:05:38.090900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.575 [2024-11-18 15:05:38.090907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.575 [2024-11-18 15:05:38.090914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.575 [2024-11-18 15:05:38.090922] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.575 [2024-11-18 15:05:38.090932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.575 [2024-11-18 15:05:38.090942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:14.575 [2024-11-18 15:05:38.090952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:14.575 [2024-11-18 15:05:38.090960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:14.575 [2024-11-18 15:05:38.090968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:14.575 [2024-11-18 15:05:38.090975] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:14.575 [2024-11-18 15:05:38.090983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:14.575 [2024-11-18 15:05:38.090991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:14.575 [2024-11-18 15:05:38.090998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:14.575 [2024-11-18 15:05:38.091006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:14.575 [2024-11-18 15:05:38.091014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:14.575 [2024-11-18 15:05:38.091022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:14.575 [2024-11-18 15:05:38.091031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:14.575 [2024-11-18 15:05:38.091039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:14.575 [2024-11-18 15:05:38.091047] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.575 [2024-11-18 15:05:38.091060] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.575 [2024-11-18 15:05:38.091068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.575 [2024-11-18 15:05:38.091075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.575 [2024-11-18 15:05:38.091083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.575 [2024-11-18 15:05:38.091090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.575 [2024-11-18 15:05:38.091097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.091107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.575 [2024-11-18 15:05:38.091115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:19:14.575 [2024-11-18 15:05:38.091125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.096961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.096996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.575 [2024-11-18 15:05:38.097005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.803 ms 00:19:14.575 [2024-11-18 15:05:38.097012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.097095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.097103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.575 [2024-11-18 15:05:38.097110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:14.575 [2024-11-18 15:05:38.097117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.115422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.115469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.575 [2024-11-18 15:05:38.115484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.273 ms 00:19:14.575 [2024-11-18 15:05:38.115498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.115549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
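(Editor's note: the "Region type:... ver:... blk_offs:... blk_sz:..." records above, emitted by ftl_superblock_v5_md_layout_dump, describe one superblock layout region per record, with offsets and sizes in hex blocks. A minimal Python sketch for pulling them back out of a captured log; the regex and field names are inferred from the record text above, not taken from SPDK headers:)

    import re

    # Matches records such as:
    #   "Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20"
    REGION_RE = re.compile(
        r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
        r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
    )

    def parse_regions(log_text):
        """Return the dumped regions, in order, with integer fields."""
        return [
            {
                "type": int(t, 16),
                "ver": int(v),
                "blk_offs": int(offs, 16),
                "blk_sz": int(sz, 16),
            }
            for t, v, offs, sz in REGION_RE.findall(log_text)
        ]

(As a cross-check: the first nvc region above, blk_sz 0x20, is 32 blocks, which at the 4 KiB FTL block size these numbers are consistent with comes to 0.125 MiB, matching the "Region sb ... blocks: 0.12 MiB" line in the MiB-denominated layout dump.)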
00:19:14.575 [2024-11-18 15:05:38.115561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.575 [2024-11-18 15:05:38.115571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.575 [2024-11-18 15:05:38.115580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.115953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.115971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.575 [2024-11-18 15:05:38.115982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:14.575 [2024-11-18 15:05:38.115996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.116135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.575 [2024-11-18 15:05:38.116150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.575 [2024-11-18 15:05:38.116161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:14.575 [2024-11-18 15:05:38.116171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.575 [2024-11-18 15:05:38.121638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.121668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.576 [2024-11-18 15:05:38.121677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.441 ms 00:19:14.576 [2024-11-18 15:05:38.121684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.576 [2024-11-18 15:05:38.123983] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:14.576 [2024-11-18 15:05:38.124119] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:14.576 [2024-11-18 15:05:38.124137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.124145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:14.576 [2024-11-18 15:05:38.124153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:19:14.576 [2024-11-18 15:05:38.124165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.576 [2024-11-18 15:05:38.138568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.138698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:14.576 [2024-11-18 15:05:38.138721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.370 ms 00:19:14.576 [2024-11-18 15:05:38.138734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.576 [2024-11-18 15:05:38.140216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.140246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:14.576 [2024-11-18 15:05:38.140255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:19:14.576 [2024-11-18 15:05:38.140261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.576 [2024-11-18 15:05:38.141713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.141741] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:14.576 [2024-11-18 15:05:38.141749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:19:14.576 [2024-11-18 15:05:38.141756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.576 [2024-11-18 15:05:38.141943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.576 [2024-11-18 15:05:38.141957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.576 [2024-11-18 15:05:38.141965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:14.576 [2024-11-18 15:05:38.141976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.159753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.159795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:14.834 [2024-11-18 15:05:38.159806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:19:14.834 [2024-11-18 15:05:38.159814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.167119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:14.834 [2024-11-18 15:05:38.169519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.169549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.834 [2024-11-18 15:05:38.169559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.658 ms 00:19:14.834 [2024-11-18 15:05:38.169567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.169629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.169640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:14.834 [2024-11-18 15:05:38.169651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:14.834 [2024-11-18 15:05:38.169658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.169708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.169722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.834 [2024-11-18 15:05:38.169730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:14.834 [2024-11-18 15:05:38.169740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.170998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.171030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:14.834 [2024-11-18 15:05:38.171039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:19:14.834 [2024-11-18 15:05:38.171049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.171077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.171085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.834 [2024-11-18 15:05:38.171093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.834 [2024-11-18 15:05:38.171099] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.171143] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:14.834 [2024-11-18 15:05:38.171153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.171160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:14.834 [2024-11-18 15:05:38.171168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:14.834 [2024-11-18 15:05:38.171175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.174252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.174283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.834 [2024-11-18 15:05:38.174292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:19:14.834 [2024-11-18 15:05:38.174304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.174377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.834 [2024-11-18 15:05:38.174387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.834 [2024-11-18 15:05:38.174400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:14.834 [2024-11-18 15:05:38.174407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.834 [2024-11-18 15:05:38.175240] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.881 ms, result 0 00:19:15.770  [2024-11-18T15:05:40.294Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-18T15:05:41.228Z] Copying: 94/1024 [MB] (47 MBps) [2024-11-18T15:05:42.602Z] Copying: 139/1024 [MB] (45 MBps) [2024-11-18T15:05:43.537Z] Copying: 185/1024 [MB] (45 MBps) [2024-11-18T15:05:44.472Z] Copying: 230/1024 [MB] (45 MBps) [2024-11-18T15:05:45.406Z] Copying: 276/1024 [MB] (46 MBps) [2024-11-18T15:05:46.340Z] Copying: 322/1024 [MB] (46 MBps) [2024-11-18T15:05:47.274Z] Copying: 370/1024 [MB] (47 MBps) [2024-11-18T15:05:48.207Z] Copying: 415/1024 [MB] (44 MBps) [2024-11-18T15:05:49.583Z] Copying: 462/1024 [MB] (47 MBps) [2024-11-18T15:05:50.518Z] Copying: 510/1024 [MB] (47 MBps) [2024-11-18T15:05:51.455Z] Copying: 556/1024 [MB] (46 MBps) [2024-11-18T15:05:52.389Z] Copying: 601/1024 [MB] (44 MBps) [2024-11-18T15:05:53.325Z] Copying: 645/1024 [MB] (44 MBps) [2024-11-18T15:05:54.262Z] Copying: 691/1024 [MB] (46 MBps) [2024-11-18T15:05:55.197Z] Copying: 728/1024 [MB] (37 MBps) [2024-11-18T15:05:56.572Z] Copying: 753/1024 [MB] (25 MBps) [2024-11-18T15:05:57.506Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-18T15:05:58.441Z] Copying: 794/1024 [MB] (24 MBps) [2024-11-18T15:05:59.399Z] Copying: 838/1024 [MB] (44 MBps) [2024-11-18T15:06:00.381Z] Copying: 884/1024 [MB] (45 MBps) [2024-11-18T15:06:01.318Z] Copying: 932/1024 [MB] (48 MBps) [2024-11-18T15:06:02.253Z] Copying: 980/1024 [MB] (47 MBps) [2024-11-18T15:06:03.191Z] Copying: 1023/1024 [MB] (43 MBps) [2024-11-18T15:06:03.191Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-18 15:06:03.052793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.052850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:39.601 [2024-11-18 15:06:03.052865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 
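(Editor's note: the "Copying: N/1024 [MB] (x MBps)" progress stream above can be cross-checked against its reported "average 41 MBps". A small sketch, assuming only the ISO-8601 timestamps shown in the stream; the helper name is illustrative:)

    import re
    from datetime import datetime

    SAMPLE_RE = re.compile(
        r"\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)\] Copying: (\d+)/1024 \[MB\]"
    )

    def copy_rate_mbps(log_text):
        """Throughput between the first and last progress samples, in MB/s."""
        samples = [
            (datetime.fromisoformat(ts.replace("Z", "+00:00")), int(mb))
            for ts, mb in SAMPLE_RE.findall(log_text)
        ]
        (t0, mb0), (t1, mb1) = samples[0], samples[-1]
        return (mb1 - mb0) / (t1 - t0).total_seconds()

(For the stream above: (1024 - 47) MB over the ~22.9 s between the first and last samples is about 43 MBps; the log's "average 41 MBps" is slightly lower because it also counts the ramp-up before the first sample was printed.)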
00:19:39.601 [2024-11-18 15:06:03.052873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.055994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.601 [2024-11-18 15:06:03.059341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.059375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:39.601 [2024-11-18 15:06:03.059389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:19:39.601 [2024-11-18 15:06:03.059397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.069731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.069780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:39.601 [2024-11-18 15:06:03.069792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.440 ms 00:19:39.601 [2024-11-18 15:06:03.069800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.087893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.087924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:39.601 [2024-11-18 15:06:03.087935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.078 ms 00:19:39.601 [2024-11-18 15:06:03.087948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.094008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.094033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:39.601 [2024-11-18 15:06:03.094042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.034 ms 00:19:39.601 [2024-11-18 15:06:03.094049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.095235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.095265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:39.601 [2024-11-18 15:06:03.095274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:19:39.601 [2024-11-18 15:06:03.095281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.098946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.098977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:39.601 [2024-11-18 15:06:03.098994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:19:39.601 [2024-11-18 15:06:03.099001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.152148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.152197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:39.601 [2024-11-18 15:06:03.152212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.113 ms 00:19:39.601 [2024-11-18 15:06:03.152220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.153887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 
15:06:03.154025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:39.601 [2024-11-18 15:06:03.154040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:19:39.601 [2024-11-18 15:06:03.154048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.155238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.155265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:39.601 [2024-11-18 15:06:03.155274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:19:39.601 [2024-11-18 15:06:03.155281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.156478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.156518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:39.601 [2024-11-18 15:06:03.156529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:19:39.601 [2024-11-18 15:06:03.156537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.157425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.601 [2024-11-18 15:06:03.157456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:39.601 [2024-11-18 15:06:03.157465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:19:39.601 [2024-11-18 15:06:03.157472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.601 [2024-11-18 15:06:03.157497] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:39.601 [2024-11-18 15:06:03.157510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:19:39.601 [2024-11-18 15:06:03.157520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157601] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 
[2024-11-18 15:06:03.157784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:39.601 [2024-11-18 15:06:03.157792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:19:39.602 [2024-11-18 15:06:03.157975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.157998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:39.602 [2024-11-18 15:06:03.158261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:39.602 [2024-11-18 15:06:03.158269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10589772-454a-40e2-9fb5-d6d6cd01fe37 00:19:39.602 [2024-11-18 15:06:03.158279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:19:39.602 [2024-11-18 15:06:03.158286] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127424 00:19:39.602 [2024-11-18 15:06:03.158292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 00:19:39.602 [2024-11-18 15:06:03.158305] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:19:39.602 [2024-11-18 15:06:03.158312] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:39.602 [2024-11-18 15:06:03.158338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:39.602 [2024-11-18 15:06:03.158346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:39.602 [2024-11-18 15:06:03.158352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:39.602 [2024-11-18 15:06:03.158358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:39.602 [2024-11-18 15:06:03.158365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-18 15:06:03.158372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:39.602 [2024-11-18 15:06:03.158380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:19:39.602 [2024-11-18 15:06:03.158387] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-18 15:06:03.159711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-18 15:06:03.159726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:39.602 [2024-11-18 15:06:03.159741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:19:39.602 [2024-11-18 15:06:03.159751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-18 15:06:03.159811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-18 15:06:03.159820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:39.602 [2024-11-18 15:06:03.159831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:39.602 [2024-11-18 15:06:03.159838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-18 15:06:03.164881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.602 [2024-11-18 15:06:03.164993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.602 [2024-11-18 15:06:03.165089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.602 [2024-11-18 15:06:03.165113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-18 15:06:03.165210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.602 [2024-11-18 15:06:03.165242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.602 [2024-11-18 15:06:03.165296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.165342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.165433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.165461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.603 [2024-11-18 15:06:03.165481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.165535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.165565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.165585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.603 [2024-11-18 15:06:03.165604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.165658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.173701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.173827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.603 [2024-11-18 15:06:03.173885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.173906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.177424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.177536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.603 [2024-11-18 15:06:03.177590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.177617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.177665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.177746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.603 [2024-11-18 15:06:03.177769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.177788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.177837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.177860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.603 [2024-11-18 15:06:03.177928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.177951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.178029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.178057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.603 [2024-11-18 15:06:03.178115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.178135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.178236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.178696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:39.603 [2024-11-18 15:06:03.178954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.179118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.179398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.179570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.603 [2024-11-18 15:06:03.179700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.179762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.179933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.603 [2024-11-18 15:06:03.180047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.603 [2024-11-18 15:06:03.180200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.603 [2024-11-18 15:06:03.180360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.603 [2024-11-18 15:06:03.180765] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 129.782 ms, result 0 00:19:40.538 00:19:40.538 00:19:40.538 15:06:03 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:19:43.072 15:06:06 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.072 [2024-11-18 15:06:06.116098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
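(Editor's note: in the spdk_dd invocation above, --count=262144 is a count of input-bdev blocks. With the 4 KiB FTL block size these logs are consistent with, that is exactly the 1024 MiB the earlier copy progress reported; the block size is an assumption here, not something the log states. A one-line sanity check:)

    # 262144 blocks x 4096 B (assumed FTL block size) = 1 GiB
    print(262144 * 4096 / 2**20, "MiB")   # -> 1024.0 MiB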
00:19:43.072 [2024-11-18 15:06:06.116423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86058 ] 00:19:43.072 [2024-11-18 15:06:06.263493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.072 [2024-11-18 15:06:06.293593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.072 [2024-11-18 15:06:06.376437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.072 [2024-11-18 15:06:06.376506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.072 [2024-11-18 15:06:06.523101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.523154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.072 [2024-11-18 15:06:06.523168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.072 [2024-11-18 15:06:06.523175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.523225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.523235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.072 [2024-11-18 15:06:06.523246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:43.072 [2024-11-18 15:06:06.523256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.523274] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.072 [2024-11-18 15:06:06.523541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.072 [2024-11-18 15:06:06.523560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.523570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.072 [2024-11-18 15:06:06.523578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:43.072 [2024-11-18 15:06:06.523585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.524595] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.072 [2024-11-18 15:06:06.526876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.527024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.072 [2024-11-18 15:06:06.527045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:19:43.072 [2024-11-18 15:06:06.527053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.527100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.527111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.072 [2024-11-18 15:06:06.527123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:43.072 [2024-11-18 15:06:06.527134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.531787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 
15:06:06.531828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.072 [2024-11-18 15:06:06.531841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:19:43.072 [2024-11-18 15:06:06.531848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.531920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.531931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.072 [2024-11-18 15:06:06.531939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:43.072 [2024-11-18 15:06:06.531949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.531988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.532000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.072 [2024-11-18 15:06:06.532010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:43.072 [2024-11-18 15:06:06.532017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.532039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.072 [2024-11-18 15:06:06.533353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.533380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.072 [2024-11-18 15:06:06.533388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:19:43.072 [2024-11-18 15:06:06.533401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.533430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.072 [2024-11-18 15:06:06.533439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.072 [2024-11-18 15:06:06.533449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:43.072 [2024-11-18 15:06:06.533456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.072 [2024-11-18 15:06:06.533479] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.072 [2024-11-18 15:06:06.533500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:43.072 [2024-11-18 15:06:06.533536] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.072 [2024-11-18 15:06:06.533550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:43.072 [2024-11-18 15:06:06.533622] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:43.072 [2024-11-18 15:06:06.533637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.072 [2024-11-18 15:06:06.533648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:43.072 [2024-11-18 15:06:06.533658] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.072 [2024-11-18 15:06:06.533666] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.072 [2024-11-18 15:06:06.533674] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.072 [2024-11-18 15:06:06.533681] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.072 [2024-11-18 15:06:06.533688] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:43.072 [2024-11-18 15:06:06.533694] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:43.073 [2024-11-18 15:06:06.533701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.533708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.073 [2024-11-18 15:06:06.533716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:43.073 [2024-11-18 15:06:06.533726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.533787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.533794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.073 [2024-11-18 15:06:06.533802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:43.073 [2024-11-18 15:06:06.533808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.533875] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.073 [2024-11-18 15:06:06.533884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.073 [2024-11-18 15:06:06.533892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.073 [2024-11-18 15:06:06.533899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.073 [2024-11-18 15:06:06.533909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.073 [2024-11-18 15:06:06.533915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.073 [2024-11-18 15:06:06.533921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.073 [2024-11-18 15:06:06.533928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.073 [2024-11-18 15:06:06.533935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.073 [2024-11-18 15:06:06.533941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.073 [2024-11-18 15:06:06.533947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.073 [2024-11-18 15:06:06.533956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.073 [2024-11-18 15:06:06.533964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.073 [2024-11-18 15:06:06.533976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.073 [2024-11-18 15:06:06.533983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:43.073 [2024-11-18 15:06:06.533989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.073 [2024-11-18 15:06:06.533996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.073 [2024-11-18 15:06:06.534002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:43.073 [2024-11-18 15:06:06.534010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:43.073 [2024-11-18 15:06:06.534024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:43.073 [2024-11-18 15:06:06.534032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.073 [2024-11-18 15:06:06.534047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.073 [2024-11-18 15:06:06.534069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.073 [2024-11-18 15:06:06.534093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.073 [2024-11-18 15:06:06.534115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.073 [2024-11-18 15:06:06.534136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.073 [2024-11-18 15:06:06.534150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.073 [2024-11-18 15:06:06.534157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:43.073 [2024-11-18 15:06:06.534164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.073 [2024-11-18 15:06:06.534171] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.073 [2024-11-18 15:06:06.534179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.073 [2024-11-18 15:06:06.534187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.073 [2024-11-18 15:06:06.534207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.073 [2024-11-18 15:06:06.534215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.073 [2024-11-18 15:06:06.534222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.073 [2024-11-18 15:06:06.534229] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.073 [2024-11-18 15:06:06.534236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.073 [2024-11-18 15:06:06.534244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.073 [2024-11-18 15:06:06.534253] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.073 [2024-11-18 15:06:06.534263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.073 [2024-11-18 15:06:06.534272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.073 [2024-11-18 15:06:06.534280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:43.073 [2024-11-18 15:06:06.534289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:43.073 [2024-11-18 15:06:06.534297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:43.073 [2024-11-18 15:06:06.534304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:43.073 [2024-11-18 15:06:06.534312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:43.073 [2024-11-18 15:06:06.534608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:43.073 [2024-11-18 15:06:06.534644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:43.073 [2024-11-18 15:06:06.534673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:43.073 [2024-11-18 15:06:06.534701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:43.073 [2024-11-18 15:06:06.534779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:43.073 [2024-11-18 15:06:06.534810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:43.073 [2024-11-18 15:06:06.534838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:43.073 [2024-11-18 15:06:06.534867] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.073 [2024-11-18 15:06:06.534902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.073 [2024-11-18 15:06:06.534980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.073 [2024-11-18 15:06:06.535049] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.073 [2024-11-18 15:06:06.535079] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.073 [2024-11-18 15:06:06.535108] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:43.073 [2024-11-18 15:06:06.535169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.535198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.073 [2024-11-18 15:06:06.535218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:19:43.073 [2024-11-18 15:06:06.535240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.541182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.541304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.073 [2024-11-18 15:06:06.541373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.888 ms 00:19:43.073 [2024-11-18 15:06:06.541620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.542080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.542134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:43.073 [2024-11-18 15:06:06.542162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:43.073 [2024-11-18 15:06:06.542199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.561671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.561725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.073 [2024-11-18 15:06:06.561742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.286 ms 00:19:43.073 [2024-11-18 15:06:06.561753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.073 [2024-11-18 15:06:06.561800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.073 [2024-11-18 15:06:06.561813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.073 [2024-11-18 15:06:06.561834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:43.074 [2024-11-18 15:06:06.561848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.562231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.562252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.074 [2024-11-18 15:06:06.562266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:43.074 [2024-11-18 15:06:06.562277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.562492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.562509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.074 [2024-11-18 15:06:06.562523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:19:43.074 [2024-11-18 15:06:06.562536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.568714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.568764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.074 [2024-11-18 15:06:06.568778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.145 ms 00:19:43.074 [2024-11-18 
15:06:06.568789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.571381] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:43.074 [2024-11-18 15:06:06.571415] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:43.074 [2024-11-18 15:06:06.571424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.571431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:43.074 [2024-11-18 15:06:06.571439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:19:43.074 [2024-11-18 15:06:06.571445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.585722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.585839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:43.074 [2024-11-18 15:06:06.585862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.242 ms 00:19:43.074 [2024-11-18 15:06:06.585869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.587537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.587568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:43.074 [2024-11-18 15:06:06.587577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:19:43.074 [2024-11-18 15:06:06.587583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.588974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.589075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:43.074 [2024-11-18 15:06:06.589088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:19:43.074 [2024-11-18 15:06:06.589096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.589280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.589290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:43.074 [2024-11-18 15:06:06.589298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:43.074 [2024-11-18 15:06:06.589307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.607268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.607311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.074 [2024-11-18 15:06:06.607345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.931 ms 00:19:43.074 [2024-11-18 15:06:06.607353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.614594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:43.074 [2024-11-18 15:06:06.616967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.616995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.074 [2024-11-18 15:06:06.617011] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:19:43.074 [2024-11-18 15:06:06.617020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.617084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.617095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.074 [2024-11-18 15:06:06.617104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.074 [2024-11-18 15:06:06.617112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.618206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.618313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.074 [2024-11-18 15:06:06.618341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:19:43.074 [2024-11-18 15:06:06.618349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.619629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.619659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:43.074 [2024-11-18 15:06:06.619667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:19:43.074 [2024-11-18 15:06:06.619674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.619715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.619726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.074 [2024-11-18 15:06:06.619736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.074 [2024-11-18 15:06:06.619744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.619775] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.074 [2024-11-18 15:06:06.619784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.619791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.074 [2024-11-18 15:06:06.619799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:43.074 [2024-11-18 15:06:06.619806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.623070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.623101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:43.074 [2024-11-18 15:06:06.623110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:19:43.074 [2024-11-18 15:06:06.623122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.623182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.074 [2024-11-18 15:06:06.623193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:43.074 [2024-11-18 15:06:06.623204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:43.074 [2024-11-18 15:06:06.623215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.074 [2024-11-18 15:06:06.629109] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 104.651 ms, result 0 00:19:44.451  [2024-11-18T15:06:08.977Z] Copying: 992/1048576 [kB] (992 kBps) [2024-11-18T15:06:09.911Z] Copying: 5720/1048576 [kB] (4728 kBps) [2024-11-18T15:06:10.847Z] Copying: 62/1024 [MB] (56 MBps) [2024-11-18T15:06:12.222Z] Copying: 115/1024 [MB] (53 MBps) [2024-11-18T15:06:13.157Z] Copying: 168/1024 [MB] (52 MBps) [2024-11-18T15:06:14.093Z] Copying: 219/1024 [MB] (51 MBps) [2024-11-18T15:06:15.028Z] Copying: 271/1024 [MB] (51 MBps) [2024-11-18T15:06:15.963Z] Copying: 324/1024 [MB] (53 MBps) [2024-11-18T15:06:16.898Z] Copying: 377/1024 [MB] (53 MBps) [2024-11-18T15:06:17.834Z] Copying: 430/1024 [MB] (52 MBps) [2024-11-18T15:06:19.212Z] Copying: 481/1024 [MB] (51 MBps) [2024-11-18T15:06:20.156Z] Copying: 535/1024 [MB] (53 MBps) [2024-11-18T15:06:21.091Z] Copying: 590/1024 [MB] (54 MBps) [2024-11-18T15:06:22.027Z] Copying: 641/1024 [MB] (51 MBps) [2024-11-18T15:06:22.963Z] Copying: 696/1024 [MB] (54 MBps) [2024-11-18T15:06:23.899Z] Copying: 748/1024 [MB] (52 MBps) [2024-11-18T15:06:24.834Z] Copying: 801/1024 [MB] (53 MBps) [2024-11-18T15:06:26.261Z] Copying: 853/1024 [MB] (52 MBps) [2024-11-18T15:06:26.840Z] Copying: 905/1024 [MB] (51 MBps) [2024-11-18T15:06:28.215Z] Copying: 958/1024 [MB] (52 MBps) [2024-11-18T15:06:28.216Z] Copying: 1010/1024 [MB] (52 MBps) [2024-11-18T15:06:29.153Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-18 15:06:28.946366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.946422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.563 [2024-11-18 15:06:28.946435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:05.563 [2024-11-18 15:06:28.946443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.946465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.563 [2024-11-18 15:06:28.946912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.946927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.563 [2024-11-18 15:06:28.946936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:20:05.563 [2024-11-18 15:06:28.946949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.947169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.947183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.563 [2024-11-18 15:06:28.947193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:20:05.563 [2024-11-18 15:06:28.947201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.960356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.960541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.563 [2024-11-18 15:06:28.960558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.140 ms 00:20:05.563 [2024-11-18 15:06:28.960566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.966875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.966981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P unmaps 00:20:05.563 [2024-11-18 15:06:28.967049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.273 ms 00:20:05.563 [2024-11-18 15:06:28.967072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.968349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.968451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.563 [2024-11-18 15:06:28.968504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:20:05.563 [2024-11-18 15:06:28.968525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.972287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.972417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.563 [2024-11-18 15:06:28.972477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:20:05.563 [2024-11-18 15:06:28.972500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.975613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.975708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.563 [2024-11-18 15:06:28.975763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:20:05.563 [2024-11-18 15:06:28.975785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.977335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.977362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.563 [2024-11-18 15:06:28.977371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:20:05.563 [2024-11-18 15:06:28.977378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.978755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.978887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.563 [2024-11-18 15:06:28.978903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:20:05.563 [2024-11-18 15:06:28.978911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.979776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.979814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.563 [2024-11-18 15:06:28.979823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:20:05.563 [2024-11-18 15:06:28.979831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.980658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.563 [2024-11-18 15:06:28.980686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.563 [2024-11-18 15:06:28.980694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:20:05.563 [2024-11-18 15:06:28.980700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.563 [2024-11-18 15:06:28.980725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:20:05.563 [2024-11-18 15:06:28.980746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:20:05.563 [2024-11-18 15:06:28.980759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:20:05.563 [2024-11-18 15:06:28.980767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.980999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.564 [2024-11-18 15:06:28.981276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981283] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981482] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.565 [2024-11-18 15:06:28.981497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.565 [2024-11-18 15:06:28.981505] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10589772-454a-40e2-9fb5-d6d6cd01fe37 00:20:05.565 [2024-11-18 15:06:28.981512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:20:05.565 [2024-11-18 15:06:28.981519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 140480 00:20:05.565 [2024-11-18 15:06:28.981530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 138496 00:20:05.565 [2024-11-18 15:06:28.981541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0143 00:20:05.565 [2024-11-18 15:06:28.981551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.565 [2024-11-18 15:06:28.981558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.565 [2024-11-18 15:06:28.981565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.565 [2024-11-18 15:06:28.981571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.565 [2024-11-18 15:06:28.981577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.565 [2024-11-18 15:06:28.981584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.565 [2024-11-18 15:06:28.981590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.565 [2024-11-18 15:06:28.981598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:20:05.565 [2024-11-18 15:06:28.981605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.982924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.565 [2024-11-18 15:06:28.982944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.565 [2024-11-18 15:06:28.982953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:20:05.565 [2024-11-18 15:06:28.982960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.983010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.565 [2024-11-18 15:06:28.983023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.565 [2024-11-18 15:06:28.983031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:05.565 [2024-11-18 15:06:28.983038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.987829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:28.987864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.565 [2024-11-18 15:06:28.987873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:28.987880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.987931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:28.987938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.565 [2024-11-18 15:06:28.987946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 
15:06:28.987953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.988022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:28.988036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.565 [2024-11-18 15:06:28.988043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:28.988050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.988065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:28.988072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.565 [2024-11-18 15:06:28.988079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:28.988086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:28.996224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:28.996263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.565 [2024-11-18 15:06:28.996273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:28.996281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:29.001866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:29.001900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.565 [2024-11-18 15:06:29.001909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:29.001916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:29.001953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:29.001967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.565 [2024-11-18 15:06:29.001974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:29.001981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:29.002022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:29.002031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.565 [2024-11-18 15:06:29.002038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:29.002045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:29.002106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.565 [2024-11-18 15:06:29.002115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.565 [2024-11-18 15:06:29.002125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.565 [2024-11-18 15:06:29.002132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.565 [2024-11-18 15:06:29.002157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.566 [2024-11-18 15:06:29.002165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.566 [2024-11-18 15:06:29.002172] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.566 [2024-11-18 15:06:29.002179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.566 [2024-11-18 15:06:29.002218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.566 [2024-11-18 15:06:29.002230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.566 [2024-11-18 15:06:29.002239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.566 [2024-11-18 15:06:29.002246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.566 [2024-11-18 15:06:29.002286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.566 [2024-11-18 15:06:29.002295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.566 [2024-11-18 15:06:29.002303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.566 [2024-11-18 15:06:29.002310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.566 [2024-11-18 15:06:29.002430] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.037 ms, result 0 00:20:06.131 00:20:06.131 00:20:06.131 15:06:29 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:08.664 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:08.664 15:06:31 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:08.664 [2024-11-18 15:06:31.771141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:08.664 [2024-11-18 15:06:31.771241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86332 ] 00:20:08.664 [2024-11-18 15:06:31.919919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.664 [2024-11-18 15:06:31.950100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.664 [2024-11-18 15:06:32.035182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.664 [2024-11-18 15:06:32.035469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.664 [2024-11-18 15:06:32.181686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.181913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:08.664 [2024-11-18 15:06:32.181984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.664 [2024-11-18 15:06:32.182009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.182080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.182106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.664 [2024-11-18 15:06:32.182125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:08.664 [2024-11-18 15:06:32.182147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.182178] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:08.664 [2024-11-18 15:06:32.182682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:08.664 [2024-11-18 15:06:32.182841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.182916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.664 [2024-11-18 15:06:32.182942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:20:08.664 [2024-11-18 15:06:32.182990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.184208] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:08.664 [2024-11-18 15:06:32.186671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.186802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:08.664 [2024-11-18 15:06:32.186884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:20:08.664 [2024-11-18 15:06:32.186907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.186964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.187065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:08.664 [2024-11-18 15:06:32.187089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:08.664 [2024-11-18 15:06:32.187107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.191736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 
15:06:32.191841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.664 [2024-11-18 15:06:32.191891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:20:08.664 [2024-11-18 15:06:32.191912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.191991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.192032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.664 [2024-11-18 15:06:32.192052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:08.664 [2024-11-18 15:06:32.192099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.192152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.192176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:08.664 [2024-11-18 15:06:32.192199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:08.664 [2024-11-18 15:06:32.192219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.192257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.664 [2024-11-18 15:06:32.193690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.193717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.664 [2024-11-18 15:06:32.193726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:20:08.664 [2024-11-18 15:06:32.193733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.193768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.193777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:08.664 [2024-11-18 15:06:32.193791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:08.664 [2024-11-18 15:06:32.193798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.193815] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:08.664 [2024-11-18 15:06:32.193833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:08.664 [2024-11-18 15:06:32.193869] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:08.664 [2024-11-18 15:06:32.193883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:08.664 [2024-11-18 15:06:32.193957] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:08.664 [2024-11-18 15:06:32.193969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:08.664 [2024-11-18 15:06:32.193982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:08.664 [2024-11-18 15:06:32.193995] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194003] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194011] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:08.664 [2024-11-18 15:06:32.194018] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:08.664 [2024-11-18 15:06:32.194025] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:08.664 [2024-11-18 15:06:32.194031] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:08.664 [2024-11-18 15:06:32.194039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.194046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:08.664 [2024-11-18 15:06:32.194053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:08.664 [2024-11-18 15:06:32.194062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.194122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.664 [2024-11-18 15:06:32.194129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:08.664 [2024-11-18 15:06:32.194136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:08.664 [2024-11-18 15:06:32.194145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.664 [2024-11-18 15:06:32.194213] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:08.664 [2024-11-18 15:06:32.194222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:08.664 [2024-11-18 15:06:32.194229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:08.664 [2024-11-18 15:06:32.194258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:08.664 [2024-11-18 15:06:32.194278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.664 [2024-11-18 15:06:32.194290] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:08.664 [2024-11-18 15:06:32.194297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:08.664 [2024-11-18 15:06:32.194303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.664 [2024-11-18 15:06:32.194325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:08.664 [2024-11-18 15:06:32.194332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:08.664 [2024-11-18 15:06:32.194341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:08.664 [2024-11-18 15:06:32.194356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:08.664 [2024-11-18 15:06:32.194363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:08.664 [2024-11-18 15:06:32.194380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:08.664 [2024-11-18 15:06:32.194388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:08.664 [2024-11-18 15:06:32.194403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:08.664 [2024-11-18 15:06:32.194411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.664 [2024-11-18 15:06:32.194418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:08.665 [2024-11-18 15:06:32.194425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:08.665 [2024-11-18 15:06:32.194432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.665 [2024-11-18 15:06:32.194440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:08.665 [2024-11-18 15:06:32.194447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:08.665 [2024-11-18 15:06:32.194454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.665 [2024-11-18 15:06:32.194465] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:08.665 [2024-11-18 15:06:32.194472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:08.665 [2024-11-18 15:06:32.194479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:08.665 [2024-11-18 15:06:32.194487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:08.665 [2024-11-18 15:06:32.194494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:08.665 [2024-11-18 15:06:32.194501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.665 [2024-11-18 15:06:32.194509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:08.665 [2024-11-18 15:06:32.194516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:08.665 [2024-11-18 15:06:32.194523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.665 [2024-11-18 15:06:32.194530] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:08.665 [2024-11-18 15:06:32.194538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:08.665 [2024-11-18 15:06:32.194546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.665 [2024-11-18 15:06:32.194554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.665 [2024-11-18 15:06:32.194562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:08.665 [2024-11-18 15:06:32.194570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:08.665 [2024-11-18 15:06:32.194577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:08.665 [2024-11-18 15:06:32.194586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:08.665 [2024-11-18 15:06:32.194593] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:08.665 [2024-11-18 15:06:32.194601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:08.665 [2024-11-18 15:06:32.194608] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:08.665 [2024-11-18 15:06:32.194620] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.665 [2024-11-18 15:06:32.194629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:08.665 [2024-11-18 15:06:32.194637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:08.665 [2024-11-18 15:06:32.194645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:08.665 [2024-11-18 15:06:32.194653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:08.665 [2024-11-18 15:06:32.194661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:08.665 [2024-11-18 15:06:32.194669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:08.665 [2024-11-18 15:06:32.194677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:08.665 [2024-11-18 15:06:32.194684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:08.665 [2024-11-18 15:06:32.194692] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:08.665 [2024-11-18 15:06:32.194700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:08.665 [2024-11-18 15:06:32.194717] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:08.665 [2024-11-18 15:06:32.194727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:08.665 [2024-11-18 15:06:32.194735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:08.665 [2024-11-18 15:06:32.194741] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:08.665 [2024-11-18 15:06:32.194749] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.665 [2024-11-18 15:06:32.194757] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:08.665 [2024-11-18 15:06:32.194764] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:08.665 [2024-11-18 15:06:32.194771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:08.665 [2024-11-18 15:06:32.194778] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:08.665 [2024-11-18 15:06:32.194785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.194792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:08.665 [2024-11-18 15:06:32.194800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:20:08.665 [2024-11-18 15:06:32.194809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.200673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.200791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.665 [2024-11-18 15:06:32.200805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.826 ms 00:20:08.665 [2024-11-18 15:06:32.200813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.200893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.200901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.665 [2024-11-18 15:06:32.200909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:08.665 [2024-11-18 15:06:32.200916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.218449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.218498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.665 [2024-11-18 15:06:32.218515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.480 ms 00:20:08.665 [2024-11-18 15:06:32.218525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.218571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.218583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.665 [2024-11-18 15:06:32.218593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.665 [2024-11-18 15:06:32.218605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.218987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.219005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.665 [2024-11-18 15:06:32.219016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:08.665 [2024-11-18 15:06:32.219026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.219165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.219177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.665 [2024-11-18 15:06:32.219188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:08.665 [2024-11-18 15:06:32.219198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.224824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.224861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.665 [2024-11-18 15:06:32.224871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.594 ms 00:20:08.665 [2024-11-18 
15:06:32.224878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.227355] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:08.665 [2024-11-18 15:06:32.227388] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:08.665 [2024-11-18 15:06:32.227398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.227406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:08.665 [2024-11-18 15:06:32.227414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:20:08.665 [2024-11-18 15:06:32.227421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.242052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.242103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:08.665 [2024-11-18 15:06:32.242115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:20:08.665 [2024-11-18 15:06:32.242124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.244055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.244167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:08.665 [2024-11-18 15:06:32.244222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.880 ms 00:20:08.665 [2024-11-18 15:06:32.244245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.245688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.245791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:08.665 [2024-11-18 15:06:32.245804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:20:08.665 [2024-11-18 15:06:32.245811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.665 [2024-11-18 15:06:32.246001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.665 [2024-11-18 15:06:32.246012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:08.665 [2024-11-18 15:06:32.246020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:20:08.665 [2024-11-18 15:06:32.246033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.263985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.264027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:08.924 [2024-11-18 15:06:32.264038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.937 ms 00:20:08.924 [2024-11-18 15:06:32.264046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.271426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:08.924 [2024-11-18 15:06:32.273900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.273929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:08.924 [2024-11-18 15:06:32.273939] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:20:08.924 [2024-11-18 15:06:32.273948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.274010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.274020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:08.924 [2024-11-18 15:06:32.274041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:08.924 [2024-11-18 15:06:32.274049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.274610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.274670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:08.924 [2024-11-18 15:06:32.274681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:08.924 [2024-11-18 15:06:32.274689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.275890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.275915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:08.924 [2024-11-18 15:06:32.275924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:20:08.924 [2024-11-18 15:06:32.275930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.924 [2024-11-18 15:06:32.275974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.924 [2024-11-18 15:06:32.275982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:08.925 [2024-11-18 15:06:32.275992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.925 [2024-11-18 15:06:32.275999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.925 [2024-11-18 15:06:32.276030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:08.925 [2024-11-18 15:06:32.276041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.925 [2024-11-18 15:06:32.276052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:08.925 [2024-11-18 15:06:32.276059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.925 [2024-11-18 15:06:32.276066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.925 [2024-11-18 15:06:32.279755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.925 [2024-11-18 15:06:32.279797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:08.925 [2024-11-18 15:06:32.279808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:20:08.925 [2024-11-18 15:06:32.279822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.925 [2024-11-18 15:06:32.279887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.925 [2024-11-18 15:06:32.279897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:08.925 [2024-11-18 15:06:32.279908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:08.925 [2024-11-18 15:06:32.279916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.925 [2024-11-18 15:06:32.280813] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 98.749 ms, result 0 00:20:10.301  [2024-11-18T15:06:34.459Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-18T15:06:35.835Z] Copying: 94/1024 [MB] (47 MBps) [2024-11-18T15:06:36.771Z] Copying: 141/1024 [MB] (46 MBps) [2024-11-18T15:06:37.706Z] Copying: 187/1024 [MB] (46 MBps) [2024-11-18T15:06:38.641Z] Copying: 236/1024 [MB] (49 MBps) [2024-11-18T15:06:39.576Z] Copying: 283/1024 [MB] (47 MBps) [2024-11-18T15:06:40.511Z] Copying: 329/1024 [MB] (46 MBps) [2024-11-18T15:06:41.887Z] Copying: 375/1024 [MB] (45 MBps) [2024-11-18T15:06:42.453Z] Copying: 424/1024 [MB] (49 MBps) [2024-11-18T15:06:43.828Z] Copying: 470/1024 [MB] (45 MBps) [2024-11-18T15:06:44.762Z] Copying: 512/1024 [MB] (42 MBps) [2024-11-18T15:06:45.695Z] Copying: 561/1024 [MB] (49 MBps) [2024-11-18T15:06:46.629Z] Copying: 608/1024 [MB] (47 MBps) [2024-11-18T15:06:47.563Z] Copying: 654/1024 [MB] (45 MBps) [2024-11-18T15:06:48.498Z] Copying: 703/1024 [MB] (49 MBps) [2024-11-18T15:06:49.871Z] Copying: 749/1024 [MB] (46 MBps) [2024-11-18T15:06:50.527Z] Copying: 793/1024 [MB] (43 MBps) [2024-11-18T15:06:51.463Z] Copying: 839/1024 [MB] (45 MBps) [2024-11-18T15:06:52.840Z] Copying: 884/1024 [MB] (45 MBps) [2024-11-18T15:06:53.784Z] Copying: 929/1024 [MB] (44 MBps) [2024-11-18T15:06:54.727Z] Copying: 958/1024 [MB] (28 MBps) [2024-11-18T15:06:55.670Z] Copying: 979/1024 [MB] (20 MBps) [2024-11-18T15:06:56.240Z] Copying: 1006/1024 [MB] (27 MBps) [2024-11-18T15:06:56.515Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-18 15:06:56.381555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.925 [2024-11-18 15:06:56.381933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.925 [2024-11-18 15:06:56.382104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:32.925 [2024-11-18 15:06:56.382171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.925 [2024-11-18 15:06:56.382253] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.925 [2024-11-18 15:06:56.383754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.925 [2024-11-18 15:06:56.383865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.925 [2024-11-18 15:06:56.383917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:20:32.925 [2024-11-18 15:06:56.383940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.925 [2024-11-18 15:06:56.384187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.925 [2024-11-18 15:06:56.384701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.925 [2024-11-18 15:06:56.384792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:32.925 [2024-11-18 15:06:56.384820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.925 [2024-11-18 15:06:56.388543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.388641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.926 [2024-11-18 15:06:56.388693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:20:32.926 [2024-11-18 15:06:56.388716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.395358] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.395455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:32.926 [2024-11-18 15:06:56.395505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.611 ms 00:20:32.926 [2024-11-18 15:06:56.395526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.397853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.397950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.926 [2024-11-18 15:06:56.397995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:20:32.926 [2024-11-18 15:06:56.398016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.402031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.402140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.926 [2024-11-18 15:06:56.402187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:20:32.926 [2024-11-18 15:06:56.402217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.410682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.410810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.926 [2024-11-18 15:06:56.410828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.426 ms 00:20:32.926 [2024-11-18 15:06:56.410837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.413406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.413436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:32.926 [2024-11-18 15:06:56.413444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:20:32.926 [2024-11-18 15:06:56.413451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.415444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.415472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:32.926 [2024-11-18 15:06:56.415481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:20:32.926 [2024-11-18 15:06:56.415488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.417066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.417173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.926 [2024-11-18 15:06:56.417186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:20:32.926 [2024-11-18 15:06:56.417194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.926 [2024-11-18 15:06:56.419887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.926 [2024-11-18 15:06:56.419992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.927 [2024-11-18 15:06:56.420027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:20:32.927 [2024-11-18 15:06:56.420050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
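The superblock metadata dump earlier in this trace lists each region twice: once in MiB (the "NV cache layout" / "Base device layout" sections) and once as hex block offsets and sizes (the "SB metadata layout" sections). The two agree under the 4 KiB block size the dump itself implies (l2p: 0x5000 blocks vs 80.00 MiB). A minimal sketch of the conversion; blk_to_mib is an illustrative helper, not part of the test scripts:

    blk_to_mib() {
        # bash arithmetic accepts the 0x-prefixed hex from the dump directly;
        # assumes a 4 KiB FTL block, as the dump's own figures imply
        awk -v blocks="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / 1048576 }'
    }
    blk_to_mib 0x5000   # 80.00 MiB -> the l2p region (type:0x2)
    blk_to_mib 0x400    # 4.00 MiB  -> each p2l checkpoint region (types 0xa-0xd)
    blk_to_mib 0x20     # 0.12 MiB  -> the superblock region (type:0x0)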
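Every management step in this trace follows the same four-entry pattern (Action, name, duration, status), and the "Management process finished" lines report per-process totals (98.749 ms for 'FTL startup' above; 70.251 ms for 'FTL shutdown' below). A rough cross-check, assuming the console output has been saved to a file (ftl_trace.log is a hypothetical name):

    # Sum the per-step durations; the result approaches but stays below the
    # reported totals, which also include inter-step dispatch overhead.
    grep -o 'duration: [0-9.]\+ ms' ftl_trace.log \
      | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'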
00:20:32.927 [2024-11-18 15:06:56.420135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.927 [2024-11-18 15:06:56.420175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:20:32.927 [2024-11-18 15:06:56.420226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:20:32.927 [2024-11-18 15:06:56.420251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.420985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 
00:20:32.927 [2024-11-18 15:06:56.421310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.927 [2024-11-18 15:06:56.421760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 
wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.421991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.422996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423157] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.928 [2024-11-18 15:06:56.423228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.928 [2024-11-18 15:06:56.423263] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10589772-454a-40e2-9fb5-d6d6cd01fe37 00:20:32.928 [2024-11-18 15:06:56.423288] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:20:32.928 [2024-11-18 15:06:56.423309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:32.928 [2024-11-18 15:06:56.423351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.928 [2024-11-18 15:06:56.423374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.928 [2024-11-18 15:06:56.423395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.928 [2024-11-18 15:06:56.423419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.928 [2024-11-18 15:06:56.423441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.928 [2024-11-18 15:06:56.423460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.928 [2024-11-18 15:06:56.423482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.928 [2024-11-18 15:06:56.423504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.928 [2024-11-18 15:06:56.423528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:32.928 [2024-11-18 15:06:56.423552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:20:32.928 [2024-11-18 15:06:56.423585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.928 [2024-11-18 15:06:56.426558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.928 [2024-11-18 15:06:56.426628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.928 [2024-11-18 15:06:56.426671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:20:32.928 [2024-11-18 15:06:56.426694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.928 [2024-11-18 15:06:56.426841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.928 [2024-11-18 15:06:56.426882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.928 [2024-11-18 15:06:56.426907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:32.928 [2024-11-18 15:06:56.426928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.928 [2024-11-18 15:06:56.435263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.435300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.929 [2024-11-18 15:06:56.435331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.435340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.435389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.435405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.929 [2024-11-18 
15:06:56.435413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.435421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.435474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.435488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.929 [2024-11-18 15:06:56.435496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.435503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.435519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.435526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.929 [2024-11-18 15:06:56.435540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.435547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.446490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.446526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.929 [2024-11-18 15:06:56.446538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.446547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.929 [2024-11-18 15:06:56.451171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.929 [2024-11-18 15:06:56.451251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.929 [2024-11-18 15:06:56.451306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.929 [2024-11-18 15:06:56.451431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451476] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:32.929 [2024-11-18 15:06:56.451484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.929 [2024-11-18 15:06:56.451599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.929 [2024-11-18 15:06:56.451670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.929 [2024-11-18 15:06:56.451678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.929 [2024-11-18 15:06:56.451687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.929 [2024-11-18 15:06:56.451810] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.251 ms, result 0 00:20:33.189 00:20:33.189 00:20:33.190 15:06:56 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:20:35.737 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:20:35.737 15:06:58 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:20:35.737 15:06:58 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:20:35.737 15:06:58 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.737 15:06:58 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:35.737 15:06:58 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:20:35.737 15:06:59 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.737 15:06:59 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:20:35.737 Process with pid 85132 is not found 00:20:35.737 15:06:59 -- ftl/dirty_shutdown.sh@37 -- # killprocess 85132 00:20:35.737 15:06:59 -- common/autotest_common.sh@936 -- # '[' -z 85132 ']' 00:20:35.737 15:06:59 -- common/autotest_common.sh@940 -- # kill -0 85132 00:20:35.737 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85132) - No such process 00:20:35.737 15:06:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85132 is not found' 00:20:35.737 15:06:59 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:20:35.998 Remove shared memory files 00:20:35.998 15:06:59 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:20:35.998 15:06:59 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:35.998 15:06:59 -- ftl/common.sh@205 -- # rm -f rm -f 00:20:35.998 15:06:59 -- ftl/common.sh@206 -- # rm -f rm -f 00:20:35.998 15:06:59 -- ftl/common.sh@207 -- # rm -f rm -f 00:20:35.998 15:06:59 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:35.998 15:06:59 -- ftl/common.sh@209 -- # rm -f rm -f 00:20:35.998 ************************************ 00:20:35.998 END TEST ftl_dirty_shutdown 00:20:35.998 ************************************ 00:20:35.998 00:20:35.998 real 2m15.047s 00:20:35.998 user 2m29.411s 00:20:35.998 sys 
0m23.254s 00:20:35.998 15:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:35.998 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:20:35.998 15:06:59 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:20:35.998 15:06:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:35.998 15:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.998 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:20:35.998 ************************************ 00:20:35.998 START TEST ftl_upgrade_shutdown 00:20:35.998 ************************************ 00:20:35.998 15:06:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:20:35.998 * Looking for test storage... 00:20:35.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.998 15:06:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:35.998 15:06:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:35.998 15:06:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:35.998 15:06:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:35.998 15:06:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:35.998 15:06:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:35.998 15:06:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:35.998 15:06:59 -- scripts/common.sh@335 -- # IFS=.-: 00:20:35.998 15:06:59 -- scripts/common.sh@335 -- # read -ra ver1 00:20:35.998 15:06:59 -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.998 15:06:59 -- scripts/common.sh@336 -- # read -ra ver2 00:20:35.998 15:06:59 -- scripts/common.sh@337 -- # local 'op=<' 00:20:35.998 15:06:59 -- scripts/common.sh@339 -- # ver1_l=2 00:20:35.998 15:06:59 -- scripts/common.sh@340 -- # ver2_l=1 00:20:35.998 15:06:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:35.998 15:06:59 -- scripts/common.sh@343 -- # case "$op" in 00:20:35.998 15:06:59 -- scripts/common.sh@344 -- # : 1 00:20:35.998 15:06:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:35.998 15:06:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.998 15:06:59 -- scripts/common.sh@364 -- # decimal 1 00:20:35.998 15:06:59 -- scripts/common.sh@352 -- # local d=1 00:20:35.998 15:06:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.998 15:06:59 -- scripts/common.sh@354 -- # echo 1 00:20:35.998 15:06:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:35.998 15:06:59 -- scripts/common.sh@365 -- # decimal 2 00:20:35.998 15:06:59 -- scripts/common.sh@352 -- # local d=2 00:20:35.998 15:06:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.998 15:06:59 -- scripts/common.sh@354 -- # echo 2 00:20:35.998 15:06:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:35.998 15:06:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:35.998 15:06:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:35.998 15:06:59 -- scripts/common.sh@367 -- # return 0 00:20:35.999 15:06:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.999 15:06:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:35.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.999 --rc genhtml_branch_coverage=1 00:20:35.999 --rc genhtml_function_coverage=1 00:20:35.999 --rc genhtml_legend=1 00:20:35.999 --rc geninfo_all_blocks=1 00:20:35.999 --rc geninfo_unexecuted_blocks=1 00:20:35.999 00:20:35.999 ' 00:20:35.999 15:06:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:35.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.999 --rc genhtml_branch_coverage=1 00:20:35.999 --rc genhtml_function_coverage=1 00:20:35.999 --rc genhtml_legend=1 00:20:35.999 --rc geninfo_all_blocks=1 00:20:35.999 --rc geninfo_unexecuted_blocks=1 00:20:35.999 00:20:35.999 ' 00:20:35.999 15:06:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:35.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.999 --rc genhtml_branch_coverage=1 00:20:35.999 --rc genhtml_function_coverage=1 00:20:35.999 --rc genhtml_legend=1 00:20:35.999 --rc geninfo_all_blocks=1 00:20:35.999 --rc geninfo_unexecuted_blocks=1 00:20:35.999 00:20:35.999 ' 00:20:35.999 15:06:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:35.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.999 --rc genhtml_branch_coverage=1 00:20:35.999 --rc genhtml_function_coverage=1 00:20:35.999 --rc genhtml_legend=1 00:20:35.999 --rc geninfo_all_blocks=1 00:20:35.999 --rc geninfo_unexecuted_blocks=1 00:20:35.999 00:20:35.999 ' 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:35.999 15:06:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:20:35.999 15:06:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.999 15:06:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.999 15:06:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
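The scripts/common.sh trace above decides whether the detected lcov (1.15) predates version 2 by splitting both strings on '.', '-' and ':' and comparing the numeric fields pairwise up to the longer length. A standalone sketch of that pattern; version_lt is a simplified stand-in that assumes purely numeric components, whereas the real cmp_versions also handles other comparison operators:

    version_lt() {
        local IFS='.-:'          # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing fields default to 0, so "2" compares as "2.0"
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not strictly less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2"

Here 1.15 vs 2 resolves on the first field (1 < 2) and returns success, which is the branch the trace takes before exporting the extra lcov branch/function coverage flags.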
00:20:35.999 15:06:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:35.999 15:06:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.999 15:06:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:35.999 15:06:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:35.999 15:06:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.999 15:06:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.999 15:06:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:35.999 15:06:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:35.999 15:06:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.999 15:06:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.999 15:06:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:35.999 15:06:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:35.999 15:06:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.999 15:06:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.999 15:06:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:35.999 15:06:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:35.999 15:06:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.999 15:06:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.999 15:06:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.999 15:06:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.999 15:06:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:35.999 15:06:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:35.999 15:06:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.999 15:06:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:20:35.999 15:06:59 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:20:35.999 15:06:59 -- ftl/common.sh@81 -- # local base_bdev= 00:20:35.999 15:06:59 -- ftl/common.sh@82 -- # local cache_bdev= 00:20:35.999 15:06:59 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:20:35.999 15:06:59 -- ftl/common.sh@89 -- # spdk_tgt_pid=86686 00:20:35.999 15:06:59 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:20:36.261 15:06:59 -- ftl/common.sh@91 -- # waitforlisten 86686 00:20:36.261 15:06:59 -- common/autotest_common.sh@829 -- # '[' -z 86686 ']' 00:20:36.261 15:06:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.261 15:06:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.261 15:06:59 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:20:36.261 15:06:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.261 15:06:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.261 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:20:36.261 [2024-11-18 15:06:59.656055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:36.261 [2024-11-18 15:06:59.656371] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86686 ] 00:20:36.261 [2024-11-18 15:06:59.804445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.261 [2024-11-18 15:06:59.845910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:36.261 [2024-11-18 15:06:59.846276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.206 15:07:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.206 15:07:00 -- common/autotest_common.sh@862 -- # return 0 00:20:37.206 15:07:00 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:20:37.206 15:07:00 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:20:37.206 15:07:00 -- ftl/common.sh@99 -- # local params 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:20:37.206 15:07:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:20:37.206 15:07:00 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:20:37.206 15:07:00 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:20:37.206 15:07:00 -- ftl/common.sh@54 -- # local name=base 00:20:37.206 15:07:00 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:20:37.206 15:07:00 -- ftl/common.sh@56 -- # local size=20480 00:20:37.206 15:07:00 -- ftl/common.sh@59 -- # local base_bdev 00:20:37.206 15:07:00 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:20:37.206 15:07:00 -- ftl/common.sh@60 -- # base_bdev=basen1 00:20:37.206 15:07:00 -- ftl/common.sh@62 -- # local base_size 00:20:37.206 15:07:00 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:20:37.206 15:07:00 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:20:37.206 15:07:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:37.206 15:07:00 -- common/autotest_common.sh@1369 -- # local bs 00:20:37.206 15:07:00 -- common/autotest_common.sh@1370 -- # local nb 00:20:37.206 15:07:00 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:20:37.467 15:07:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:37.467 { 00:20:37.467 "name": "basen1", 00:20:37.467 "aliases": [ 00:20:37.467 "62db1a0b-14b4-4178-a2f4-fa615329b4bf" 00:20:37.467 ], 00:20:37.467 "product_name": "NVMe disk", 00:20:37.467 "block_size": 4096, 00:20:37.467 "num_blocks": 1310720, 00:20:37.467 "uuid": "62db1a0b-14b4-4178-a2f4-fa615329b4bf", 00:20:37.467 "assigned_rate_limits": { 00:20:37.467 "rw_ios_per_sec": 0, 00:20:37.467 "rw_mbytes_per_sec": 0, 00:20:37.467 "r_mbytes_per_sec": 0, 00:20:37.467 "w_mbytes_per_sec": 0 00:20:37.467 }, 00:20:37.467 "claimed": true, 00:20:37.467 "claim_type": "read_many_write_one", 00:20:37.467 "zoned": false, 00:20:37.467 "supported_io_types": { 00:20:37.467 "read": true, 00:20:37.467 "write": true, 00:20:37.467 "unmap": true, 00:20:37.467 "write_zeroes": true, 00:20:37.467 "flush": true, 00:20:37.467 "reset": true, 00:20:37.467 "compare": true, 00:20:37.467 "compare_and_write": false, 00:20:37.467 "abort": true, 00:20:37.467 "nvme_admin": true, 00:20:37.467 "nvme_io": true 00:20:37.467 }, 00:20:37.467 "driver_specific": { 00:20:37.467 "nvme": [ 00:20:37.467 { 00:20:37.467 "pci_address": "0000:00:07.0", 00:20:37.467 "trid": { 00:20:37.467 "trtype": "PCIe", 00:20:37.467 "traddr": "0000:00:07.0" 00:20:37.467 }, 00:20:37.467 "ctrlr_data": { 00:20:37.467 "cntlid": 0, 00:20:37.467 "vendor_id": "0x1b36", 00:20:37.467 "model_number": "QEMU NVMe Ctrl", 00:20:37.467 "serial_number": "12341", 00:20:37.467 "firmware_revision": "8.0.0", 00:20:37.467 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:37.467 "oacs": { 00:20:37.467 "security": 0, 00:20:37.467 "format": 1, 00:20:37.467 "firmware": 0, 00:20:37.467 "ns_manage": 1 00:20:37.467 }, 00:20:37.467 "multi_ctrlr": false, 00:20:37.467 "ana_reporting": false 00:20:37.467 }, 00:20:37.467 "vs": { 00:20:37.467 "nvme_version": "1.4" 00:20:37.467 }, 00:20:37.467 "ns_data": { 00:20:37.467 "id": 1, 00:20:37.467 "can_share": false 00:20:37.467 } 00:20:37.467 } 00:20:37.467 ], 00:20:37.467 "mp_policy": "active_passive" 00:20:37.467 } 00:20:37.467 } 00:20:37.467 ]' 00:20:37.467 15:07:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:37.467 15:07:01 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:37.467 15:07:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:20:37.467 15:07:01 -- common/autotest_common.sh@1373 -- # nb=1310720 00:20:37.467 15:07:01 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:20:37.467 15:07:01 -- common/autotest_common.sh@1377 -- # echo 5120 00:20:37.467 15:07:01 -- ftl/common.sh@63 -- # base_size=5120 00:20:37.467 15:07:01 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:20:37.467 15:07:01 -- ftl/common.sh@67 -- # clear_lvols 00:20:37.727 15:07:01 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.727 15:07:01 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:20:37.727 15:07:01 -- ftl/common.sh@28 -- # stores=3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0 00:20:37.727 15:07:01 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:37.727 15:07:01 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e0d3a64-8fea-4e73-b7e0-ff42368e7dd0 00:20:37.987 15:07:01 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:20:38.247 15:07:01 -- ftl/common.sh@68 -- # lvs=d433713a-75ed-4801-b07d-b7888aa14850 00:20:38.247 15:07:01 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d433713a-75ed-4801-b07d-b7888aa14850 00:20:38.508 15:07:01 -- ftl/common.sh@107 -- # base_bdev=cf91a558-1c77-4340-beab-a81ece622d6e 00:20:38.508 15:07:01 -- ftl/common.sh@108 -- # [[ -z cf91a558-1c77-4340-beab-a81ece622d6e ]] 00:20:38.508 15:07:01 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 cf91a558-1c77-4340-beab-a81ece622d6e 5120 00:20:38.508 15:07:01 -- ftl/common.sh@35 -- # local name=cache 00:20:38.508 15:07:01 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:20:38.508 15:07:01 -- ftl/common.sh@37 -- # local base_bdev=cf91a558-1c77-4340-beab-a81ece622d6e 00:20:38.508 15:07:01 -- ftl/common.sh@38 -- # local cache_size=5120 00:20:38.508 15:07:01 -- ftl/common.sh@41 -- # get_bdev_size cf91a558-1c77-4340-beab-a81ece622d6e 00:20:38.508 15:07:01 -- common/autotest_common.sh@1367 -- # local bdev_name=cf91a558-1c77-4340-beab-a81ece622d6e 00:20:38.508 15:07:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:38.508 15:07:01 -- common/autotest_common.sh@1369 -- # local bs 00:20:38.508 15:07:01 -- common/autotest_common.sh@1370 -- # local nb 00:20:38.508 15:07:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf91a558-1c77-4340-beab-a81ece622d6e 00:20:38.508 15:07:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:38.508 { 00:20:38.508 "name": "cf91a558-1c77-4340-beab-a81ece622d6e", 00:20:38.508 "aliases": [ 00:20:38.508 "lvs/basen1p0" 00:20:38.508 ], 00:20:38.508 "product_name": "Logical Volume", 00:20:38.508 "block_size": 4096, 00:20:38.508 "num_blocks": 5242880, 00:20:38.508 "uuid": "cf91a558-1c77-4340-beab-a81ece622d6e", 00:20:38.508 "assigned_rate_limits": { 00:20:38.508 "rw_ios_per_sec": 0, 00:20:38.508 "rw_mbytes_per_sec": 0, 00:20:38.508 "r_mbytes_per_sec": 0, 00:20:38.508 "w_mbytes_per_sec": 0 00:20:38.508 }, 00:20:38.508 "claimed": false, 00:20:38.508 "zoned": false, 00:20:38.508 "supported_io_types": { 00:20:38.508 "read": true, 00:20:38.508 "write": true, 00:20:38.508 "unmap": true, 00:20:38.508 "write_zeroes": true, 00:20:38.508 "flush": false, 00:20:38.508 "reset": true, 00:20:38.508 "compare": false, 00:20:38.508 "compare_and_write": false, 00:20:38.508 "abort": false, 00:20:38.508 "nvme_admin": false, 00:20:38.508 "nvme_io": false 00:20:38.508 }, 00:20:38.508 "driver_specific": { 00:20:38.508 "lvol": { 00:20:38.508 "lvol_store_uuid": "d433713a-75ed-4801-b07d-b7888aa14850", 00:20:38.508 "base_bdev": "basen1", 00:20:38.508 "thin_provision": true, 00:20:38.508 "snapshot": false, 00:20:38.508 "clone": false, 00:20:38.508 "esnap_clone": false 00:20:38.508 } 00:20:38.508 } 00:20:38.508 } 00:20:38.508 ]' 00:20:38.508 15:07:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:38.508 15:07:02 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:38.508 15:07:02 -- common/autotest_common.sh@1373 -- # jq '.[] 
.num_blocks' 00:20:38.508 15:07:02 -- common/autotest_common.sh@1373 -- # nb=5242880 00:20:38.508 15:07:02 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:20:38.508 15:07:02 -- common/autotest_common.sh@1377 -- # echo 20480 00:20:38.768 15:07:02 -- ftl/common.sh@41 -- # local base_size=1024 00:20:38.768 15:07:02 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:38.768 15:07:02 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:20:38.768 15:07:02 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:20:38.768 15:07:02 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:20:38.768 15:07:02 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:20:39.028 15:07:02 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:20:39.028 15:07:02 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:20:39.028 15:07:02 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cf91a558-1c77-4340-beab-a81ece622d6e -c cachen1p0 --l2p_dram_limit 2 00:20:39.290 [2024-11-18 15:07:02.708615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.708662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:20:39.290 [2024-11-18 15:07:02.708678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:20:39.290 [2024-11-18 15:07:02.708686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.708745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.708756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:20:39.290 [2024-11-18 15:07:02.708768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:20:39.290 [2024-11-18 15:07:02.708775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.708797] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:20:39.290 [2024-11-18 15:07:02.709043] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:20:39.290 [2024-11-18 15:07:02.709062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.709070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:20:39.290 [2024-11-18 15:07:02.709081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:20:39.290 [2024-11-18 15:07:02.709089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.709119] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 7b24f346-7401-4176-82c6-3f5072062c4c 00:20:39.290 [2024-11-18 15:07:02.710465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.710496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:20:39.290 [2024-11-18 15:07:02.710509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:20:39.290 [2024-11-18 15:07:02.710519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.717533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.717569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:20:39.290 [2024-11-18 15:07:02.717579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.942 ms 00:20:39.290 [2024-11-18 15:07:02.717593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.717634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.717646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:20:39.290 [2024-11-18 15:07:02.717654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:20:39.290 [2024-11-18 15:07:02.717663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.717711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.717723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:20:39.290 [2024-11-18 15:07:02.717736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:20:39.290 [2024-11-18 15:07:02.717746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.717769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:20:39.290 [2024-11-18 15:07:02.719556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.719583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:20:39.290 [2024-11-18 15:07:02.719598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.790 ms 00:20:39.290 [2024-11-18 15:07:02.719606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.719636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.719645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:20:39.290 [2024-11-18 15:07:02.719659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:20:39.290 [2024-11-18 15:07:02.719666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.719685] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:20:39.290 [2024-11-18 15:07:02.719796] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:20:39.290 [2024-11-18 15:07:02.719810] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:20:39.290 [2024-11-18 15:07:02.719823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:20:39.290 [2024-11-18 15:07:02.719837] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:20:39.290 [2024-11-18 15:07:02.719845] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:20:39.290 [2024-11-18 15:07:02.719855] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:20:39.290 [2024-11-18 15:07:02.719862] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:20:39.290 [2024-11-18 15:07:02.719877] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:20:39.290 [2024-11-18 15:07:02.719885] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:20:39.290 [2024-11-18 
15:07:02.719894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.719901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:20:39.290 [2024-11-18 15:07:02.719911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:20:39.290 [2024-11-18 15:07:02.719918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.719986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.290 [2024-11-18 15:07:02.719995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:20:39.290 [2024-11-18 15:07:02.720005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:20:39.290 [2024-11-18 15:07:02.720012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.290 [2024-11-18 15:07:02.720087] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:20:39.290 [2024-11-18 15:07:02.720097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:20:39.291 [2024-11-18 15:07:02.720106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:20:39.291 [2024-11-18 15:07:02.720129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:20:39.291 [2024-11-18 15:07:02.720146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:20:39.291 [2024-11-18 15:07:02.720154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:20:39.291 [2024-11-18 15:07:02.720160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:20:39.291 [2024-11-18 15:07:02.720178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:20:39.291 [2024-11-18 15:07:02.720188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:20:39.291 [2024-11-18 15:07:02.720203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:20:39.291 [2024-11-18 15:07:02.720227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:20:39.291 [2024-11-18 15:07:02.720235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:20:39.291 [2024-11-18 15:07:02.720250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:20:39.291 [2024-11-18 15:07:02.720257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:20:39.291 [2024-11-18 15:07:02.720272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:20:39.291 [2024-11-18 
15:07:02.720280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:20:39.291 [2024-11-18 15:07:02.720295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:20:39.291 [2024-11-18 15:07:02.720301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:20:39.291 [2024-11-18 15:07:02.720338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:20:39.291 [2024-11-18 15:07:02.720347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:20:39.291 [2024-11-18 15:07:02.720363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:20:39.291 [2024-11-18 15:07:02.720369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:20:39.291 [2024-11-18 15:07:02.720384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:20:39.291 [2024-11-18 15:07:02.720408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:20:39.291 [2024-11-18 15:07:02.720423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720437] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:20:39.291 [2024-11-18 15:07:02.720447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:20:39.291 [2024-11-18 15:07:02.720458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:39.291 [2024-11-18 15:07:02.720480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:20:39.291 [2024-11-18 15:07:02.720488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:20:39.291 [2024-11-18 15:07:02.720496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:20:39.291 [2024-11-18 15:07:02.720504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:20:39.291 [2024-11-18 15:07:02.720512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:20:39.291 [2024-11-18 15:07:02.720518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:20:39.291 [2024-11-18 15:07:02.720528] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:20:39.291 [2024-11-18 15:07:02.720537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:20:39.291 [2024-11-18 15:07:02.720554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:20:39.291 [2024-11-18 
15:07:02.720563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:20:39.291 [2024-11-18 15:07:02.720581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:20:39.291 [2024-11-18 15:07:02.720588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:20:39.291 [2024-11-18 15:07:02.720596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:20:39.291 [2024-11-18 15:07:02.720603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:20:39.291 [2024-11-18 15:07:02.720646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:20:39.291 [2024-11-18 15:07:02.720653] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:20:39.291 [2024-11-18 15:07:02.720668] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720677] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.291 [2024-11-18 15:07:02.720685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:20:39.291 [2024-11-18 15:07:02.720693] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:20:39.291 [2024-11-18 15:07:02.720702] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:20:39.291 [2024-11-18 15:07:02.720715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.291 [2024-11-18 15:07:02.720734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:20:39.291 [2024-11-18 15:07:02.720743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.674 ms 00:20:39.291 [2024-11-18 15:07:02.720753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.291 [2024-11-18 15:07:02.728097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.291 [2024-11-18 15:07:02.728213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:20:39.291 [2024-11-18 15:07:02.728267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.298 ms 00:20:39.291 [2024-11-18 15:07:02.728293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.291 [2024-11-18 15:07:02.728356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.291 [2024-11-18 15:07:02.728390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:20:39.291 [2024-11-18 15:07:02.728412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:20:39.291 [2024-11-18 15:07:02.728434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.291 [2024-11-18 15:07:02.739307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.291 [2024-11-18 15:07:02.739426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:20:39.291 [2024-11-18 15:07:02.739478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.816 ms 00:20:39.291 [2024-11-18 15:07:02.739503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.291 [2024-11-18 15:07:02.739542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.291 [2024-11-18 15:07:02.739569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:20:39.291 [2024-11-18 15:07:02.739589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:20:39.292 [2024-11-18 15:07:02.739614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 15:07:02.740048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.292 [2024-11-18 15:07:02.740145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:20:39.292 [2024-11-18 15:07:02.740236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:20:39.292 [2024-11-18 15:07:02.740262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 15:07:02.740350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.292 [2024-11-18 15:07:02.740426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:20:39.292 [2024-11-18 15:07:02.740480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:20:39.292 [2024-11-18 15:07:02.740506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 15:07:02.747289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.292 [2024-11-18 15:07:02.747401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:20:39.292 [2024-11-18 15:07:02.747450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.723 ms 00:20:39.292 [2024-11-18 15:07:02.747476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 15:07:02.756506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:20:39.292 [2024-11-18 15:07:02.757570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.292 [2024-11-18 15:07:02.757658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:20:39.292 [2024-11-18 15:07:02.757713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.016 ms 00:20:39.292 [2024-11-18 15:07:02.757735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 
15:07:02.774446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:39.292 [2024-11-18 15:07:02.774553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:20:39.292 [2024-11-18 15:07:02.774603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.669 ms 00:20:39.292 [2024-11-18 15:07:02.774627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:39.292 [2024-11-18 15:07:02.774679] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:20:39.292 [2024-11-18 15:07:02.774722] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:20:42.593 [2024-11-18 15:07:05.617767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.593 [2024-11-18 15:07:05.617954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:20:42.593 [2024-11-18 15:07:05.618028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2843.071 ms 00:20:42.593 [2024-11-18 15:07:05.618054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.593 [2024-11-18 15:07:05.618197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.593 [2024-11-18 15:07:05.618259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:20:42.593 [2024-11-18 15:07:05.618308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:20:42.593 [2024-11-18 15:07:05.618348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.593 [2024-11-18 15:07:05.621425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.593 [2024-11-18 15:07:05.621546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:20:42.593 [2024-11-18 15:07:05.621607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.006 ms 00:20:42.593 [2024-11-18 15:07:05.621631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.593 [2024-11-18 15:07:05.628278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.593 [2024-11-18 15:07:05.628605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:20:42.593 [2024-11-18 15:07:05.628790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.396 ms 00:20:42.593 [2024-11-18 15:07:05.628936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.593 [2024-11-18 15:07:05.629524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.629731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:20:42.594 [2024-11-18 15:07:05.629883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:20:42.594 [2024-11-18 15:07:05.630020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.656766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.656874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:20:42.594 [2024-11-18 15:07:05.656927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.574 ms 00:20:42.594 [2024-11-18 15:07:05.656951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.662177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:20:42.594 [2024-11-18 15:07:05.662291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:20:42.594 [2024-11-18 15:07:05.662370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.898 ms 00:20:42.594 [2024-11-18 15:07:05.662397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.663736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.663834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:20:42.594 [2024-11-18 15:07:05.663883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.296 ms 00:20:42.594 [2024-11-18 15:07:05.663906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.667222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.667335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:20:42.594 [2024-11-18 15:07:05.667387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.278 ms 00:20:42.594 [2024-11-18 15:07:05.667409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.667711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.667781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:20:42.594 [2024-11-18 15:07:05.667808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:20:42.594 [2024-11-18 15:07:05.667856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.667959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:42.594 [2024-11-18 15:07:05.667986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:20:42.594 [2024-11-18 15:07:05.668054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:20:42.594 [2024-11-18 15:07:05.668079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:42.594 [2024-11-18 15:07:05.669176] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2960.069 ms, result 0 00:20:42.594 { 00:20:42.594 "name": "ftl", 00:20:42.594 "uuid": "7b24f346-7401-4176-82c6-3f5072062c4c" 00:20:42.594 } 00:20:42.594 15:07:05 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:20:42.594 [2024-11-18 15:07:05.857749] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.594 15:07:05 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:20:42.594 15:07:06 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:20:42.855 [2024-11-18 15:07:06.234183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:20:42.855 15:07:06 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:20:42.855 [2024-11-18 15:07:06.422581] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:42.855 15:07:06 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:43.429 Fill FTL, 
iteration 1 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:20:43.429 15:07:06 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:20:43.429 15:07:06 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:43.429 15:07:06 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:43.429 15:07:06 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:43.430 15:07:06 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:20:43.430 15:07:06 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:20:43.430 15:07:06 -- ftl/common.sh@163 -- # spdk_ini_pid=86801 00:20:43.430 15:07:06 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:20:43.430 15:07:06 -- ftl/common.sh@165 -- # waitforlisten 86801 /var/tmp/spdk.tgt.sock 00:20:43.430 15:07:06 -- common/autotest_common.sh@829 -- # '[' -z 86801 ']' 00:20:43.430 15:07:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:20:43.430 15:07:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.430 15:07:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:20:43.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:20:43.430 15:07:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.430 15:07:06 -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 [2024-11-18 15:07:06.795839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
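Everything traced between START TEST and the save_config call above boils down to a short RPC pipeline: attach the base and cache NVMe controllers, carve a thin-provisioned 20 GiB lvol out of the base namespace, split a 5 GiB write buffer off the cache namespace, create the FTL bdev on the pair, and export it over NVMe/TCP. Condensed, with the same RPCs, BDFs and UUIDs this run used ($rpc standing in for scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: 20480 MiB thin lvol on the 0000:00:07.0 namespace
    # (any stale lvstore, 3e0d3a64-... in this run, is deleted first).
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0   # -> basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                           # -> d433713a-...
    $rpc bdev_lvol_create basen1p0 20480 -t -u d433713a-75ed-4801-b07d-b7888aa14850

    # Write-buffer cache: first 5120 MiB split of the 0000:00:06.0 namespace.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0  # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0

    # FTL bdev on (lvol, cache split), then the NVMe/TCP export.
    $rpc -t 60 bdev_ftl_create -b ftl -d cf91a558-1c77-4340-beab-a81ece622d6e \
        -c cachen1p0 --l2p_dram_limit 2
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config

Note the trick visible in the bdev_get_bdevs dump above: basen1 reports 4096 B x 1310720 blocks = 5120 MiB, so the 20480 MiB lvol only works because it is thin-provisioned (-t); that is how the test gets a 20 GiB base device out of a 5 GiB QEMU namespace.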
00:20:43.430 [2024-11-18 15:07:06.796137] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86801 ] 00:20:43.430 [2024-11-18 15:07:06.942073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.430 [2024-11-18 15:07:06.982789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:43.430 [2024-11-18 15:07:06.983195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.371 15:07:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.371 15:07:07 -- common/autotest_common.sh@862 -- # return 0 00:20:44.371 15:07:07 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:20:44.371 ftln1 00:20:44.371 15:07:07 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:20:44.371 15:07:07 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:20:44.630 15:07:08 -- ftl/common.sh@173 -- # echo ']}' 00:20:44.630 15:07:08 -- ftl/common.sh@176 -- # killprocess 86801 00:20:44.630 15:07:08 -- common/autotest_common.sh@936 -- # '[' -z 86801 ']' 00:20:44.630 15:07:08 -- common/autotest_common.sh@940 -- # kill -0 86801 00:20:44.630 15:07:08 -- common/autotest_common.sh@941 -- # uname 00:20:44.630 15:07:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.630 15:07:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86801 00:20:44.630 killing process with pid 86801 00:20:44.630 15:07:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:44.630 15:07:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:44.630 15:07:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86801' 00:20:44.630 15:07:08 -- common/autotest_common.sh@955 -- # kill 86801 00:20:44.630 15:07:08 -- common/autotest_common.sh@960 -- # wait 86801 00:20:44.888 15:07:08 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:20:44.888 15:07:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:20:44.888 [2024-11-18 15:07:08.458441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
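The spdk_tgt that just started (pid 86801, core 1, socket /var/tmp/spdk.tgt.sock) is a throwaway initiator: tcp_initiator_setup attaches it to the target, snapshots its bdev subsystem configuration into ini.json, and kills it; every later tcp_dd call simply replays that JSON through spdk_dd. The two helpers, reconstructed from the common.sh trace lines above (waitforlisten/killprocess details elided; treat this as a sketch, not the verbatim source):

    tcp_initiator_setup() {
        local cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
        [[ -f $cnfg ]] && return 0          # later calls reuse the saved config
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock &
        local ini_pid=$!                    # the real helper waits for the socket here
        local rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
        $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2018-09.io.spdk:cnode0                      # -> ftln1
        { echo '{"subsystems": ['
          $rpc save_subsystem_config -n bdev
          echo ']}'; } > "$cnfg"
        kill "$ini_pid"
    }

    tcp_dd() {  # forwards its arguments to spdk_dd against the saved config
        tcp_initiator_setup
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

Capturing a static JSON once keeps each spdk_dd run self-contained: no initiator daemon has to stay alive between the fill and checksum passes.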
00:20:44.888 [2024-11-18 15:07:08.458549] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86827 ] 00:20:45.147 [2024-11-18 15:07:08.604060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.147 [2024-11-18 15:07:08.643238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.530  [2024-11-18T15:07:11.062Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-18T15:07:11.996Z] Copying: 404/1024 [MB] (203 MBps) [2024-11-18T15:07:12.928Z] Copying: 675/1024 [MB] (271 MBps) [2024-11-18T15:07:13.186Z] Copying: 951/1024 [MB] (276 MBps) [2024-11-18T15:07:13.447Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:20:49.857 00:20:49.857 15:07:13 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:20:49.857 15:07:13 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:20:49.857 Calculate MD5 checksum, iteration 1 00:20:49.857 15:07:13 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:20:49.857 15:07:13 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:49.857 15:07:13 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:49.857 15:07:13 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:49.857 15:07:13 -- ftl/common.sh@154 -- # return 0 00:20:49.857 15:07:13 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:20:49.857 [2024-11-18 15:07:13.358187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
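With the plumbing up, the body of the test is a two-iteration fill-and-checksum loop: each pass streams 1024 one-MiB random blocks into ftln1 at queue depth 2, reads the same 1 GiB window back into a scratch file, and records its MD5. The loop shape, paraphrased from the upgrade_shutdown.sh xtrace above ($testdir standing for spdk/test/ftl):

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        (( seek += count ))     # trace: seek=1024 after pass 1, 2048 after pass 2
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        (( skip += count ))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done

The throughput asymmetry in the Copying lines is expected: the fill passes land in the FTL write path at roughly 200-280 MBps, while the read-backs for checksumming run at 600-700 MBps.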
00:20:49.857 [2024-11-18 15:07:13.358505] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86880 ] 00:20:50.117 [2024-11-18 15:07:13.498556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.117 [2024-11-18 15:07:13.535534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.143  [2024-11-18T15:07:15.299Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-18T15:07:15.560Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:20:51.970 00:20:51.970 15:07:15 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:20:51.970 15:07:15 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:20:53.890 Fill FTL, iteration 2 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a5ff3fa412500b95d6cb5cca7c03cdfd 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:20:53.890 15:07:17 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:20:53.890 15:07:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:53.890 15:07:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:53.890 15:07:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:53.890 15:07:17 -- ftl/common.sh@154 -- # return 0 00:20:53.890 15:07:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:20:54.149 [2024-11-18 15:07:17.498429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
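Iteration 1's digest ends up in sums[0] (a5ff3fa412500b95d6cb5cca7c03cdfd in this run), and iteration 2 repeats the pattern one gigabyte higher (--seek=1024, then --skip=1024). This excerpt ends before the sums array is consumed, but per-window digests only pay off if the test re-reads the same windows after the prepared shutdown and upgrade, so the eventual verify step plausibly looks like the following (inferred shape, not copied from this log):

    # Hypothetical post-upgrade verification; names and structure assumed.
    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
            --qd=2 --skip=$(( i * 1024 ))
        [[ $(md5sum "$testdir/file" | cut -f1 -d' ') == "${sums[i]}" ]] || exit 1
    done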
00:20:54.149 [2024-11-18 15:07:17.498554] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86930 ] 00:20:54.149 [2024-11-18 15:07:17.645876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.149 [2024-11-18 15:07:17.686172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.529  [2024-11-18T15:07:20.059Z] Copying: 208/1024 [MB] (208 MBps) [2024-11-18T15:07:20.997Z] Copying: 464/1024 [MB] (256 MBps) [2024-11-18T15:07:21.936Z] Copying: 745/1024 [MB] (281 MBps) [2024-11-18T15:07:22.197Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:20:58.607 00:20:58.607 15:07:22 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:20:58.607 15:07:22 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:20:58.607 Calculate MD5 checksum, iteration 2 00:20:58.607 15:07:22 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:58.607 15:07:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:58.607 15:07:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:58.607 15:07:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:58.607 15:07:22 -- ftl/common.sh@154 -- # return 0 00:20:58.607 15:07:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:58.607 [2024-11-18 15:07:22.109683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:58.607 [2024-11-18 15:07:22.109953] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86983 ] 00:20:58.867 [2024-11-18 15:07:22.249848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.867 [2024-11-18 15:07:22.286529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.244  [2024-11-18T15:07:24.096Z] Copying: 718/1024 [MB] (718 MBps) [2024-11-18T15:07:24.667Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:21:01.077 00:21:01.077 15:07:24 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:21:01.077 15:07:24 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bd8755e97ea045b5ad8fdd0a9f06d607 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:21:03.611 [2024-11-18 15:07:26.922652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:03.611 [2024-11-18 15:07:26.922820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:03.611 [2024-11-18 15:07:26.922840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:21:03.611 [2024-11-18 15:07:26.922848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.611 [2024-11-18 15:07:26.922878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:03.611 [2024-11-18 15:07:26.922886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:03.611 [2024-11-18 15:07:26.922897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:03.611 [2024-11-18 15:07:26.922904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.611 [2024-11-18 15:07:26.922925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:03.611 [2024-11-18 15:07:26.922932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:03.611 [2024-11-18 15:07:26.922938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:03.611 [2024-11-18 15:07:26.922944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.611 [2024-11-18 15:07:26.923001] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.338 ms, result 0 00:21:03.611 true 00:21:03.611 15:07:26 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:03.611 { 00:21:03.611 "name": "ftl", 00:21:03.611 "properties": [ 00:21:03.611 { 00:21:03.611 "name": "superblock_version", 00:21:03.611 "value": 5, 00:21:03.611 "read-only": true 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "name": "base_device", 00:21:03.611 "bands": [ 00:21:03.611 { 00:21:03.611 "id": 0, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 1, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 2, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 
00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 3, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 4, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 5, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 6, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 7, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 8, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 9, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 10, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 11, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 12, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 13, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 14, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 15, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.611 { 00:21:03.611 "id": 16, 00:21:03.611 "state": "FREE", 00:21:03.611 "validity": 0.0 00:21:03.611 }, 00:21:03.612 { 00:21:03.612 "id": 17, 00:21:03.612 "state": "FREE", 00:21:03.612 "validity": 0.0 00:21:03.612 } 00:21:03.612 ], 00:21:03.612 "read-only": true 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "name": "cache_device", 00:21:03.612 "type": "bdev", 00:21:03.612 "chunks": [ 00:21:03.612 { 00:21:03.612 "id": 0, 00:21:03.612 "state": "CLOSED", 00:21:03.612 "utilization": 1.0 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "id": 1, 00:21:03.612 "state": "CLOSED", 00:21:03.612 "utilization": 1.0 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "id": 2, 00:21:03.612 "state": "OPEN", 00:21:03.612 "utilization": 0.001953125 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "id": 3, 00:21:03.612 "state": "OPEN", 00:21:03.612 "utilization": 0.0 00:21:03.612 } 00:21:03.612 ], 00:21:03.612 "read-only": true 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "name": "verbose_mode", 00:21:03.612 "value": true, 00:21:03.612 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:21:03.612 }, 00:21:03.612 { 00:21:03.612 "name": "prep_upgrade_on_shutdown", 00:21:03.612 "value": false, 00:21:03.612 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:21:03.612 } 00:21:03.612 ] 00:21:03.612 } 00:21:03.612 15:07:27 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:21:03.871 [2024-11-18 15:07:27.262868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:03.871 [2024-11-18 15:07:27.262911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:03.871 [2024-11-18 15:07:27.262923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:03.871 [2024-11-18 15:07:27.262929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.871 [2024-11-18 15:07:27.262947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:21:03.871 [2024-11-18 15:07:27.262954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:03.871 [2024-11-18 15:07:27.262961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:03.871 [2024-11-18 15:07:27.262967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.871 [2024-11-18 15:07:27.262982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:03.871 [2024-11-18 15:07:27.262989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:03.871 [2024-11-18 15:07:27.262995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:03.871 [2024-11-18 15:07:27.263001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:03.871 [2024-11-18 15:07:27.263048] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.174 ms, result 0 00:21:03.871 true 00:21:03.871 15:07:27 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:21:03.871 15:07:27 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:21:03.871 15:07:27 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:04.130 15:07:27 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:21:04.130 15:07:27 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:21:04.130 15:07:27 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:21:04.130 [2024-11-18 15:07:27.647195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:04.130 [2024-11-18 15:07:27.647236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:04.130 [2024-11-18 15:07:27.647246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:04.130 [2024-11-18 15:07:27.647252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:04.130 [2024-11-18 15:07:27.647270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:04.130 [2024-11-18 15:07:27.647277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:04.130 [2024-11-18 15:07:27.647283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:04.130 [2024-11-18 15:07:27.647289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:04.130 [2024-11-18 15:07:27.647305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:04.130 [2024-11-18 15:07:27.647311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:04.130 [2024-11-18 15:07:27.647335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:04.130 [2024-11-18 15:07:27.647341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:04.131 [2024-11-18 15:07:27.647388] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:21:04.131 true 00:21:04.131 15:07:27 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:04.390 { 00:21:04.390 "name": "ftl", 00:21:04.390 "properties": [ 00:21:04.390 { 00:21:04.390 "name": "superblock_version", 00:21:04.390 "value": 5, 00:21:04.390 "read-only": true 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 
"name": "base_device", 00:21:04.390 "bands": [ 00:21:04.390 { 00:21:04.390 "id": 0, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 1, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 2, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 3, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 4, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 5, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 6, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 7, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 8, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 9, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 10, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 11, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 12, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.390 { 00:21:04.390 "id": 13, 00:21:04.390 "state": "FREE", 00:21:04.390 "validity": 0.0 00:21:04.390 }, 00:21:04.391 { 00:21:04.391 "id": 14, 00:21:04.391 "state": "FREE", 00:21:04.391 "validity": 0.0 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 15, 00:21:04.391 "state": "FREE", 00:21:04.391 "validity": 0.0 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 16, 00:21:04.391 "state": "FREE", 00:21:04.391 "validity": 0.0 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 17, 00:21:04.391 "state": "FREE", 00:21:04.391 "validity": 0.0 00:21:04.391 } 00:21:04.391 ], 00:21:04.391 "read-only": true 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "name": "cache_device", 00:21:04.391 "type": "bdev", 00:21:04.391 "chunks": [ 00:21:04.391 { 00:21:04.391 "id": 0, 00:21:04.391 "state": "CLOSED", 00:21:04.391 "utilization": 1.0 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 1, 00:21:04.391 "state": "CLOSED", 00:21:04.391 "utilization": 1.0 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 2, 00:21:04.391 "state": "OPEN", 00:21:04.391 "utilization": 0.001953125 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "id": 3, 00:21:04.391 "state": "OPEN", 00:21:04.391 "utilization": 0.0 00:21:04.391 } 00:21:04.391 ], 00:21:04.391 "read-only": true 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "name": "verbose_mode", 00:21:04.391 "value": true, 00:21:04.391 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:21:04.391 }, 00:21:04.391 { 00:21:04.391 "name": "prep_upgrade_on_shutdown", 00:21:04.391 "value": true, 00:21:04.391 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:21:04.391 } 00:21:04.391 ] 00:21:04.391 } 00:21:04.391 15:07:27 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:21:04.391 15:07:27 -- ftl/common.sh@130 -- # [[ -n 86686 ]] 00:21:04.391 15:07:27 -- ftl/common.sh@131 -- # killprocess 86686 00:21:04.391 15:07:27 -- common/autotest_common.sh@936 -- # '[' -z 86686 ']' 00:21:04.391 15:07:27 -- 
common/autotest_common.sh@940 -- # kill -0 86686 00:21:04.391 15:07:27 -- common/autotest_common.sh@941 -- # uname 00:21:04.391 15:07:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.391 15:07:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86686 00:21:04.391 killing process with pid 86686 00:21:04.391 15:07:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.391 15:07:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.391 15:07:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86686' 00:21:04.391 15:07:27 -- common/autotest_common.sh@955 -- # kill 86686 00:21:04.391 15:07:27 -- common/autotest_common.sh@960 -- # wait 86686 00:21:04.391 [2024-11-18 15:07:27.946881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:21:04.391 [2024-11-18 15:07:27.950712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:04.391 [2024-11-18 15:07:27.950745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:21:04.391 [2024-11-18 15:07:27.950755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:21:04.391 [2024-11-18 15:07:27.950764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:04.391 [2024-11-18 15:07:27.950784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:21:04.391 [2024-11-18 15:07:27.951299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:04.391 [2024-11-18 15:07:27.951330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:21:04.391 [2024-11-18 15:07:27.951340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.503 ms 00:21:04.391 [2024-11-18 15:07:27.951346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.523 [2024-11-18 15:07:34.676386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.523 [2024-11-18 15:07:34.676627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:21:12.523 [2024-11-18 15:07:34.676646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6724.992 ms 00:21:12.523 [2024-11-18 15:07:34.676653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.677858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.677880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:21:12.524 [2024-11-18 15:07:34.677893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:21:12.524 [2024-11-18 15:07:34.677900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.678788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.678808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:21:12.524 [2024-11-18 15:07:34.678817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:21:12.524 [2024-11-18 15:07:34.678823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.680375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.680404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:21:12.524 [2024-11-18 15:07:34.680412] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.515 ms 00:21:12.524 [2024-11-18 15:07:34.680418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.682359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.682391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:21:12.524 [2024-11-18 15:07:34.682399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.917 ms 00:21:12.524 [2024-11-18 15:07:34.682405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.682460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.682468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:21:12.524 [2024-11-18 15:07:34.682476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:21:12.524 [2024-11-18 15:07:34.682482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.683521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.683633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:21:12.524 [2024-11-18 15:07:34.683645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.026 ms 00:21:12.524 [2024-11-18 15:07:34.683650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.684782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.684803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:21:12.524 [2024-11-18 15:07:34.684810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:21:12.524 [2024-11-18 15:07:34.684816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.685896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.685922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:21:12.524 [2024-11-18 15:07:34.685929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.058 ms 00:21:12.524 [2024-11-18 15:07:34.685935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.686882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.686979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:21:12.524 [2024-11-18 15:07:34.686991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:21:12.524 [2024-11-18 15:07:34.686997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.687018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:21:12.524 [2024-11-18 15:07:34.687030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:12.524 [2024-11-18 15:07:34.687043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:21:12.524 [2024-11-18 15:07:34.687051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:21:12.524 [2024-11-18 15:07:34.687063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:12.524 [2024-11-18 15:07:34.687158] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:21:12.524 [2024-11-18 15:07:34.687164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7b24f346-7401-4176-82c6-3f5072062c4c 00:21:12.524 [2024-11-18 15:07:34.687171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:21:12.524 [2024-11-18 15:07:34.687176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:21:12.524 [2024-11-18 15:07:34.687182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:21:12.524 [2024-11-18 15:07:34.687188] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:21:12.524 [2024-11-18 15:07:34.687193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:21:12.524 [2024-11-18 15:07:34.687200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:21:12.524 [2024-11-18 15:07:34.687205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:21:12.524 [2024-11-18 15:07:34.687210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:21:12.524 [2024-11-18 15:07:34.687215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:21:12.524 [2024-11-18 15:07:34.687223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.687229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:21:12.524 [2024-11-18 15:07:34.687235] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:21:12.524 [2024-11-18 15:07:34.687243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.688969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.688993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:21:12.524 [2024-11-18 15:07:34.689001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.714 ms 00:21:12.524 [2024-11-18 15:07:34.689008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.689064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:12.524 [2024-11-18 15:07:34.689072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:21:12.524 [2024-11-18 15:07:34.689083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:21:12.524 [2024-11-18 15:07:34.689088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.695008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.695035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:12.524 [2024-11-18 15:07:34.695043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.695049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.695071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.695078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:12.524 [2024-11-18 15:07:34.695089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.695095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.695131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.695139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:12.524 [2024-11-18 15:07:34.695146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.695152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.695164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.695171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:12.524 [2024-11-18 15:07:34.695178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.695188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.706226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.706399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:12.524 [2024-11-18 15:07:34.706412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.706418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.710620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.710646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:12.524 
[2024-11-18 15:07:34.710664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.710671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.524 [2024-11-18 15:07:34.710726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.524 [2024-11-18 15:07:34.710734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:12.524 [2024-11-18 15:07:34.710740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.524 [2024-11-18 15:07:34.710746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.710771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.525 [2024-11-18 15:07:34.710778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:12.525 [2024-11-18 15:07:34.710785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.525 [2024-11-18 15:07:34.710791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.710850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.525 [2024-11-18 15:07:34.710858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:12.525 [2024-11-18 15:07:34.710864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.525 [2024-11-18 15:07:34.710870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.710894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.525 [2024-11-18 15:07:34.710901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:21:12.525 [2024-11-18 15:07:34.710907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.525 [2024-11-18 15:07:34.710913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.710949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.525 [2024-11-18 15:07:34.710956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:12.525 [2024-11-18 15:07:34.710962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.525 [2024-11-18 15:07:34.710968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.711007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:12.525 [2024-11-18 15:07:34.711015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:12.525 [2024-11-18 15:07:34.711021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:12.525 [2024-11-18 15:07:34.711027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:12.525 [2024-11-18 15:07:34.711139] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6760.402 ms, result 0 00:21:15.827 15:07:38 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:21:15.827 15:07:38 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:21:15.827 15:07:38 -- ftl/common.sh@81 -- # local base_bdev= 00:21:15.827 15:07:38 -- ftl/common.sh@82 -- # local cache_bdev= 00:21:15.827 15:07:38 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:15.827 15:07:38 -- ftl/common.sh@89 -- # spdk_tgt_pid=87167 00:21:15.827 15:07:38 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:15.827 15:07:38 -- ftl/common.sh@91 -- # waitforlisten 87167 00:21:15.827 15:07:38 -- common/autotest_common.sh@829 -- # '[' -z 87167 ']' 00:21:15.827 15:07:38 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:15.827 15:07:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.827 15:07:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.827 15:07:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.827 15:07:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.827 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:21:15.827 [2024-11-18 15:07:39.065816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:15.827 [2024-11-18 15:07:39.066138] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87167 ] 00:21:15.827 [2024-11-18 15:07:39.213525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.827 [2024-11-18 15:07:39.250707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:15.827 [2024-11-18 15:07:39.250879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.089 [2024-11-18 15:07:39.522615] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:21:16.089 [2024-11-18 15:07:39.522688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:21:16.089 [2024-11-18 15:07:39.658894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.658935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:21:16.089 [2024-11-18 15:07:39.658949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:21:16.089 [2024-11-18 15:07:39.658961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.659016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.659026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:16.089 [2024-11-18 15:07:39.659037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:21:16.089 [2024-11-18 15:07:39.659045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.659066] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:21:16.089 [2024-11-18 15:07:39.659294] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:21:16.089 [2024-11-18 15:07:39.659497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.659506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:16.089 [2024-11-18 15:07:39.659515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:21:16.089 [2024-11-18 15:07:39.659522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.660899] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:21:16.089 [2024-11-18 15:07:39.663933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.663967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:21:16.089 [2024-11-18 15:07:39.663983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.036 ms 00:21:16.089 [2024-11-18 15:07:39.663990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.664052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.664063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:21:16.089 [2024-11-18 15:07:39.664072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:21:16.089 [2024-11-18 15:07:39.664079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.670654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.670827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:16.089 [2024-11-18 15:07:39.670849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.528 ms 00:21:16.089 [2024-11-18 15:07:39.670862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.670908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.670917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:16.089 [2024-11-18 15:07:39.670925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:21:16.089 [2024-11-18 15:07:39.670936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.670976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.670991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:21:16.089 [2024-11-18 15:07:39.670999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:21:16.089 [2024-11-18 15:07:39.671006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.671035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:21:16.089 [2024-11-18 15:07:39.672711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.672745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:16.089 [2024-11-18 15:07:39.672754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.683 ms 00:21:16.089 [2024-11-18 15:07:39.672765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.672800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.089 [2024-11-18 15:07:39.672813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:21:16.089 [2024-11-18 15:07:39.672824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:16.089 [2024-11-18 15:07:39.672832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.089 [2024-11-18 15:07:39.672855] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
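The Action / name / duration / status records above are the FTL management state machine tracing each startup step; a finish_msg record later summarizes the whole 'FTL startup' process. A minimal sketch, assuming a target listening on the default /var/tmp/spdk.sock, of the property checks this test drives: these are the same rpc.py and jq pipelines traced at upgrade_shutdown.sh@59, @63 and @89 elsewhere in this log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count NV-cache chunks that hold any data (utilization != 0.0)
    used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # count bands still in the OPENED state (expected to be 0 after a clean shutdown)
    opened=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    echo "used chunks: $used, opened bands: $opened"
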
00:21:16.089 [2024-11-18 15:07:39.672874] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:21:16.089 [2024-11-18 15:07:39.672908] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:21:16.089 [2024-11-18 15:07:39.672923] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:21:16.089 [2024-11-18 15:07:39.672999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:21:16.089 [2024-11-18 15:07:39.673008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:21:16.089 [2024-11-18 15:07:39.673018] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:21:16.089 [2024-11-18 15:07:39.673027] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:21:16.089 [2024-11-18 15:07:39.673035] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:21:16.089 [2024-11-18 15:07:39.673042] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:21:16.090 [2024-11-18 15:07:39.673049] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:21:16.090 [2024-11-18 15:07:39.673056] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:21:16.090 [2024-11-18 15:07:39.673063] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:21:16.090 [2024-11-18 15:07:39.673071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.090 [2024-11-18 15:07:39.673077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:21:16.090 [2024-11-18 15:07:39.673087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:21:16.090 [2024-11-18 15:07:39.673095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.090 [2024-11-18 15:07:39.673158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.090 [2024-11-18 15:07:39.673166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:21:16.090 [2024-11-18 15:07:39.673172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:21:16.090 [2024-11-18 15:07:39.673184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.090 [2024-11-18 15:07:39.673259] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:21:16.090 [2024-11-18 15:07:39.673268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:21:16.090 [2024-11-18 15:07:39.673275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:21:16.090 [2024-11-18 15:07:39.673300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:21:16.090 [2024-11-18 15:07:39.673344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:21:16.090 [2024-11-18 15:07:39.673353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:21:16.090 [2024-11-18 15:07:39.673361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:21:16.090 [2024-11-18 15:07:39.673380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:21:16.090 [2024-11-18 15:07:39.673387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:21:16.090 [2024-11-18 15:07:39.673403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:21:16.090 [2024-11-18 15:07:39.673425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:21:16.090 [2024-11-18 15:07:39.673432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:21:16.090 [2024-11-18 15:07:39.673447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:21:16.090 [2024-11-18 15:07:39.673455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:21:16.090 [2024-11-18 15:07:39.673470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:21:16.090 [2024-11-18 15:07:39.673494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:21:16.090 [2024-11-18 15:07:39.673517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:21:16.090 [2024-11-18 15:07:39.673539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:21:16.090 [2024-11-18 15:07:39.673561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:21:16.090 [2024-11-18 15:07:39.673583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673597] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:21:16.090 [2024-11-18 15:07:39.673609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:21:16.090 [2024-11-18 15:07:39.673619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:16.090 [2024-11-18 15:07:39.673637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:21:16.090 [2024-11-18 15:07:39.673645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:21:16.090 [2024-11-18 15:07:39.673653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:21:16.090 [2024-11-18 15:07:39.673660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:21:16.090 [2024-11-18 15:07:39.673667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:21:16.090 [2024-11-18 15:07:39.673675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:21:16.090 [2024-11-18 15:07:39.673683] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:21:16.090 [2024-11-18 15:07:39.673696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:21:16.090 [2024-11-18 15:07:39.673714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:21:16.090 [2024-11-18 15:07:39.673742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:21:16.090 [2024-11-18 15:07:39.673751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:21:16.090 [2024-11-18 15:07:39.673761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:21:16.090 [2024-11-18 15:07:39.673769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:21:16.090 [2024-11-18 15:07:39.673807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:21:16.090 [2024-11-18 15:07:39.673813] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:21:16.090 [2024-11-18 15:07:39.673821] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673829] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:16.090 [2024-11-18 15:07:39.673836] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:21:16.090 [2024-11-18 15:07:39.673843] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:21:16.090 [2024-11-18 15:07:39.673850] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:21:16.090 [2024-11-18 15:07:39.673858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.090 [2024-11-18 15:07:39.673866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:21:16.090 [2024-11-18 15:07:39.673875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:21:16.090 [2024-11-18 15:07:39.673884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.681594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.681716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:16.352 [2024-11-18 15:07:39.681766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.658 ms 00:21:16.352 [2024-11-18 15:07:39.681789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.681840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.681861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:21:16.352 [2024-11-18 15:07:39.681886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:21:16.352 [2024-11-18 15:07:39.681907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.693093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.693208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:16.352 [2024-11-18 15:07:39.693256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.126 ms 00:21:16.352 [2024-11-18 15:07:39.693278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.693337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.693367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:16.352 [2024-11-18 15:07:39.693389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:16.352 [2024-11-18 15:07:39.693408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.693857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.693906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:16.352 [2024-11-18 
15:07:39.693927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:21:16.352 [2024-11-18 15:07:39.693945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.352 [2024-11-18 15:07:39.694005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.352 [2024-11-18 15:07:39.694027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:16.352 [2024-11-18 15:07:39.694095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:21:16.352 [2024-11-18 15:07:39.694121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.701270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.701386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:16.353 [2024-11-18 15:07:39.701435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.114 ms 00:21:16.353 [2024-11-18 15:07:39.701457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.704619] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:16.353 [2024-11-18 15:07:39.704740] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:21:16.353 [2024-11-18 15:07:39.704796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.704816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:21:16.353 [2024-11-18 15:07:39.704835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.233 ms 00:21:16.353 [2024-11-18 15:07:39.704857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.708824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.708927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:21:16.353 [2024-11-18 15:07:39.708975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.923 ms 00:21:16.353 [2024-11-18 15:07:39.709005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.710506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.710609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:21:16.353 [2024-11-18 15:07:39.710654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.455 ms 00:21:16.353 [2024-11-18 15:07:39.710675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.712249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.712362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:21:16.353 [2024-11-18 15:07:39.712410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.507 ms 00:21:16.353 [2024-11-18 15:07:39.712430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.712661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.712691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:21:16.353 [2024-11-18 15:07:39.712743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 
00:21:16.353 [2024-11-18 15:07:39.712769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.734252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.734382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:21:16.353 [2024-11-18 15:07:39.734434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.450 ms 00:21:16.353 [2024-11-18 15:07:39.734457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.741984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:21:16.353 [2024-11-18 15:07:39.742796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.742832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:21:16.353 [2024-11-18 15:07:39.742848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.236 ms 00:21:16.353 [2024-11-18 15:07:39.742861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.742932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.742944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:21:16.353 [2024-11-18 15:07:39.742954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:16.353 [2024-11-18 15:07:39.742962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.743004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.743017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:21:16.353 [2024-11-18 15:07:39.743026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:21:16.353 [2024-11-18 15:07:39.743035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.744543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.744571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:21:16.353 [2024-11-18 15:07:39.744582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:21:16.353 [2024-11-18 15:07:39.744590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.744620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.744628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:21:16.353 [2024-11-18 15:07:39.744638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:16.353 [2024-11-18 15:07:39.744645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.744700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:21:16.353 [2024-11-18 15:07:39.744711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.744718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:21:16.353 [2024-11-18 15:07:39.744725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:21:16.353 [2024-11-18 15:07:39.744732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.748421] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.748535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:21:16.353 [2024-11-18 15:07:39.748586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.669 ms 00:21:16.353 [2024-11-18 15:07:39.748612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.748753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.353 [2024-11-18 15:07:39.748778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:21:16.353 [2024-11-18 15:07:39.748830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:21:16.353 [2024-11-18 15:07:39.748852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.353 [2024-11-18 15:07:39.749870] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 90.584 ms, result 0 00:21:16.353 [2024-11-18 15:07:39.763283] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.353 [2024-11-18 15:07:39.779338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:21:16.353 [2024-11-18 15:07:39.787461] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:16.353 15:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.353 15:07:39 -- common/autotest_common.sh@862 -- # return 0 00:21:16.353 15:07:39 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:16.353 15:07:39 -- ftl/common.sh@95 -- # return 0 00:21:16.353 15:07:39 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:21:16.613 [2024-11-18 15:07:40.052406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.613 [2024-11-18 15:07:40.052449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:16.613 [2024-11-18 15:07:40.052466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:21:16.613 [2024-11-18 15:07:40.052474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.613 [2024-11-18 15:07:40.052498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.613 [2024-11-18 15:07:40.052506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:16.613 [2024-11-18 15:07:40.052515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:16.613 [2024-11-18 15:07:40.052528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.613 [2024-11-18 15:07:40.052548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:16.613 [2024-11-18 15:07:40.052556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:16.613 [2024-11-18 15:07:40.052570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:16.613 [2024-11-18 15:07:40.052580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:16.613 [2024-11-18 15:07:40.052638] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:21:16.613 true 00:21:16.613 15:07:40 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:21:16.872 { 00:21:16.872 "name": "ftl", 00:21:16.872 "properties": [ 00:21:16.872 { 00:21:16.872 "name": "superblock_version", 00:21:16.872 "value": 5, 00:21:16.872 "read-only": true 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "name": "base_device", 00:21:16.872 "bands": [ 00:21:16.872 { 00:21:16.872 "id": 0, 00:21:16.872 "state": "CLOSED", 00:21:16.872 "validity": 1.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 1, 00:21:16.872 "state": "CLOSED", 00:21:16.872 "validity": 1.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 2, 00:21:16.872 "state": "CLOSED", 00:21:16.872 "validity": 0.007843137254901933 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 3, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 4, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 5, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 6, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 7, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 8, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 9, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 10, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 11, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 12, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 13, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 14, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 15, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 16, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 17, 00:21:16.872 "state": "FREE", 00:21:16.872 "validity": 0.0 00:21:16.872 } 00:21:16.872 ], 00:21:16.872 "read-only": true 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "name": "cache_device", 00:21:16.872 "type": "bdev", 00:21:16.872 "chunks": [ 00:21:16.872 { 00:21:16.872 "id": 0, 00:21:16.872 "state": "OPEN", 00:21:16.872 "utilization": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 1, 00:21:16.872 "state": "OPEN", 00:21:16.872 "utilization": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 2, 00:21:16.872 "state": "FREE", 00:21:16.872 "utilization": 0.0 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "id": 3, 00:21:16.872 "state": "FREE", 00:21:16.872 "utilization": 0.0 00:21:16.872 } 00:21:16.872 ], 00:21:16.872 "read-only": true 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "name": "verbose_mode", 00:21:16.872 "value": true, 00:21:16.872 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:21:16.872 }, 00:21:16.872 { 00:21:16.872 "name": "prep_upgrade_on_shutdown", 00:21:16.872 "value": false, 00:21:16.872 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:21:16.872 } 00:21:16.872 ] 00:21:16.872 } 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
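The JSON above is the reply to bdev_ftl_get_properties once verbose_mode has been switched on; with verbose mode enabled the FTL bdev additionally exposes per-band validity and per-chunk NV-cache utilization, which the test inspects next. A minimal sketch of the two RPC calls involved, exactly as they appear in the trace (only the repo-relative rpc.py path is shortened here):

    # Enable verbose properties on the FTL bdev "ftl", then dump them as JSON.
    scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    scripts/rpc.py bdev_ftl_get_properties -b ftl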
00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:21:16.872 15:07:40 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:17.131 Validate MD5 checksum, iteration 1 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:21:17.131 15:07:40 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:17.131 15:07:40 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:17.131 15:07:40 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:17.131 15:07:40 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:17.131 15:07:40 -- ftl/common.sh@154 -- # return 0 00:21:17.131 15:07:40 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:17.131 [2024-11-18 15:07:40.710545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
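Before any data is hashed, upgrade_shutdown.sh gates on two jq counts over that properties JSON: the number of NV-cache chunks with non-zero utilization and the number of bands reported as OPENED, both of which must be zero. Only then does tcp_dd read the first 1024 MiB of the ftln1 namespace over NVMe/TCP into a scratch file via spdk_dd. A sketch of those guards, assuming ftl_get_properties wraps the rpc.py call shown in the trace:

    # Both counts must be zero before checksum validation may start.
    used=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')
    opened=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "bands") | .bands[]
        | select(.state == "OPENED")] | length')
    [[ $used -ne 0 || $opened -ne 0 ]] && exit 1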
00:21:17.132 [2024-11-18 15:07:40.710873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87193 ] 00:21:17.392 [2024-11-18 15:07:40.857004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.392 [2024-11-18 15:07:40.897956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.772  [2024-11-18T15:07:42.931Z] Copying: 690/1024 [MB] (690 MBps) [2024-11-18T15:07:43.502Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:21:19.912 00:21:19.912 15:07:43 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:21:19.912 15:07:43 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@103 -- # sum=a5ff3fa412500b95d6cb5cca7c03cdfd 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@105 -- # [[ a5ff3fa412500b95d6cb5cca7c03cdfd != \a\5\f\f\3\f\a\4\1\2\5\0\0\b\9\5\d\6\c\b\5\c\c\a\7\c\0\3\c\d\f\d ]] 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:21:22.452 Validate MD5 checksum, iteration 2 00:21:22.452 15:07:45 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:22.452 15:07:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:22.452 15:07:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:22.452 15:07:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:22.452 15:07:45 -- ftl/common.sh@154 -- # return 0 00:21:22.452 15:07:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:22.452 [2024-11-18 15:07:45.614229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
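Iteration 1 streamed the first gigabyte back at roughly 680 MBps and its MD5 (a5ff3fa412500b95d6cb5cca7c03cdfd) matched the reference; the backslash-heavy right-hand side of the [[ ... != ... ]] line is only bash xtrace escaping every character of the unquoted pattern operand so it reads literally. A sketch of the per-iteration comparison (the mismatch handling is assumed, not shown in the trace), with $file standing in for test/ftl/file:

    # Hash what tcp_dd read back and compare against the reference sum.
    sum=$(md5sum "$file" | cut -f1 -d' ')
    [[ $sum != "$expected" ]] && { echo "checksum mismatch, iteration $i"; exit 1; }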
00:21:22.452 [2024-11-18 15:07:45.614340] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87254 ] 00:21:22.452 [2024-11-18 15:07:45.746430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.452 [2024-11-18 15:07:45.784656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.827  [2024-11-18T15:07:47.678Z] Copying: 695/1024 [MB] (695 MBps) [2024-11-18T15:07:48.254Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:21:24.664 00:21:24.664 15:07:47 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:21:24.664 15:07:47 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@103 -- # sum=bd8755e97ea045b5ad8fdd0a9f06d607 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@105 -- # [[ bd8755e97ea045b5ad8fdd0a9f06d607 != \b\d\8\7\5\5\e\9\7\e\a\0\4\5\b\5\a\d\8\f\d\d\0\a\9\f\0\6\d\6\0\7 ]] 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:21:26.568 15:07:50 -- ftl/common.sh@137 -- # [[ -n 87167 ]] 00:21:26.568 15:07:50 -- ftl/common.sh@138 -- # kill -9 87167 00:21:26.568 15:07:50 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:21:26.568 15:07:50 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:21:26.568 15:07:50 -- ftl/common.sh@81 -- # local base_bdev= 00:21:26.568 15:07:50 -- ftl/common.sh@82 -- # local cache_bdev= 00:21:26.568 15:07:50 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:26.568 15:07:50 -- ftl/common.sh@89 -- # spdk_tgt_pid=87304 00:21:26.568 15:07:50 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:26.568 15:07:50 -- ftl/common.sh@91 -- # waitforlisten 87304 00:21:26.568 15:07:50 -- common/autotest_common.sh@829 -- # '[' -z 87304 ']' 00:21:26.568 15:07:50 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:26.568 15:07:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.568 15:07:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.568 15:07:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.568 15:07:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.568 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:21:26.828 [2024-11-18 15:07:50.163877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
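Here the scenario performs its dirty shutdown: the first startup had already executed the 'Set FTL dirty state' step, so when the first target (pid 87167) is killed with SIGKILL there is no clean-state marker on disk, and the fresh target (pid 87304) started from the same tgt.json is forced down the recovery path traced below. The sequence as the common.sh helpers perform it, sketched with the variable names visible in the trace (the backgrounding and pid capture are assumed helper internals):

    # Dirty shutdown: SIGKILL the old target, then restart on the same config.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until RPC answers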
00:21:26.828 [2024-11-18 15:07:50.163972] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87304 ] 00:21:26.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 87167 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:21:26.828 [2024-11-18 15:07:50.305531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.828 [2024-11-18 15:07:50.343803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:26.828 [2024-11-18 15:07:50.343989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.087 [2024-11-18 15:07:50.610429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:21:27.087 [2024-11-18 15:07:50.610489] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:21:27.347 [2024-11-18 15:07:50.746541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.746732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:21:27.347 [2024-11-18 15:07:50.746750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:27.347 [2024-11-18 15:07:50.746757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.746812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.746820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:27.347 [2024-11-18 15:07:50.746829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:21:27.347 [2024-11-18 15:07:50.746835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.746856] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:21:27.347 [2024-11-18 15:07:50.747048] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:21:27.347 [2024-11-18 15:07:50.747067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.747073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:27.347 [2024-11-18 15:07:50.747079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:21:27.347 [2024-11-18 15:07:50.747084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.747307] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:21:27.347 [2024-11-18 15:07:50.751179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.751209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:21:27.347 [2024-11-18 15:07:50.751221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.872 ms 00:21:27.347 [2024-11-18 15:07:50.751227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.752171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.752197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:21:27.347 [2024-11-18 15:07:50.752205] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:21:27.347 [2024-11-18 15:07:50.752211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.752607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.752622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:27.347 [2024-11-18 15:07:50.752629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:21:27.347 [2024-11-18 15:07:50.752635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.752661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.752671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:27.347 [2024-11-18 15:07:50.752678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:21:27.347 [2024-11-18 15:07:50.752683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.752706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.752713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:21:27.347 [2024-11-18 15:07:50.752721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:21:27.347 [2024-11-18 15:07:50.752727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.752747] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:21:27.347 [2024-11-18 15:07:50.753460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.753581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:27.347 [2024-11-18 15:07:50.753594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:21:27.347 [2024-11-18 15:07:50.753605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.753629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.753639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:21:27.347 [2024-11-18 15:07:50.753647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:21:27.347 [2024-11-18 15:07:50.753658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.753675] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:21:27.347 [2024-11-18 15:07:50.753692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:21:27.347 [2024-11-18 15:07:50.753718] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:21:27.347 [2024-11-18 15:07:50.753733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:21:27.347 [2024-11-18 15:07:50.753793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:21:27.347 [2024-11-18 15:07:50.753801] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:21:27.347 [2024-11-18 15:07:50.753809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:21:27.347 [2024-11-18 15:07:50.753816] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:21:27.347 [2024-11-18 15:07:50.753828] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:21:27.347 [2024-11-18 15:07:50.753835] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:21:27.347 [2024-11-18 15:07:50.753840] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:21:27.347 [2024-11-18 15:07:50.753846] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:21:27.347 [2024-11-18 15:07:50.753853] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:21:27.347 [2024-11-18 15:07:50.753859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.753865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:21:27.347 [2024-11-18 15:07:50.753874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:21:27.347 [2024-11-18 15:07:50.753879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.753927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.347 [2024-11-18 15:07:50.753933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:21:27.347 [2024-11-18 15:07:50.753939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:21:27.347 [2024-11-18 15:07:50.753948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.347 [2024-11-18 15:07:50.754004] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:21:27.347 [2024-11-18 15:07:50.754013] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:21:27.347 [2024-11-18 15:07:50.754019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:27.347 [2024-11-18 15:07:50.754028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:21:27.347 [2024-11-18 15:07:50.754042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:21:27.347 [2024-11-18 15:07:50.754056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:21:27.347 [2024-11-18 15:07:50.754064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:21:27.347 [2024-11-18 15:07:50.754071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:21:27.347 [2024-11-18 15:07:50.754081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:21:27.347 [2024-11-18 15:07:50.754086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:21:27.347 [2024-11-18 15:07:50.754097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754107] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:21:27.347 [2024-11-18 15:07:50.754112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:21:27.347 [2024-11-18 15:07:50.754117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.347 [2024-11-18 15:07:50.754123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:21:27.347 [2024-11-18 15:07:50.754128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:21:27.348 [2024-11-18 15:07:50.754133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:21:27.348 [2024-11-18 15:07:50.754145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:21:27.348 [2024-11-18 15:07:50.754160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:21:27.348 [2024-11-18 15:07:50.754176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:21:27.348 [2024-11-18 15:07:50.754193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:21:27.348 [2024-11-18 15:07:50.754210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.348 [2024-11-18 15:07:50.754221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:21:27.348 [2024-11-18 15:07:50.754228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.348 [2024-11-18 15:07:50.754240] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:21:27.348 [2024-11-18 15:07:50.754248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:21:27.348 [2024-11-18 15:07:50.754258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:27.348 [2024-11-18 15:07:50.754271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:21:27.348 [2024-11-18 15:07:50.754277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:21:27.348 [2024-11-18 15:07:50.754282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:21:27.348 [2024-11-18 15:07:50.754288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:21:27.348 [2024-11-18 15:07:50.754294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:21:27.348 [2024-11-18 15:07:50.754301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:21:27.348 [2024-11-18 15:07:50.754307] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:21:27.348 [2024-11-18 15:07:50.754326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:21:27.348 [2024-11-18 15:07:50.754344] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:21:27.348 [2024-11-18 15:07:50.754363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:21:27.348 [2024-11-18 15:07:50.754370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:21:27.348 [2024-11-18 15:07:50.754376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:21:27.348 [2024-11-18 15:07:50.754383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:21:27.348 [2024-11-18 15:07:50.754414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:21:27.348 [2024-11-18 15:07:50.754419] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:21:27.348 [2024-11-18 15:07:50.754426] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754434] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:27.348 [2024-11-18 15:07:50.754441] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:21:27.348 [2024-11-18 15:07:50.754446] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:21:27.348 
[2024-11-18 15:07:50.754455] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:21:27.348 [2024-11-18 15:07:50.754462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.754469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:21:27.348 [2024-11-18 15:07:50.754479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.490 ms 00:21:27.348 [2024-11-18 15:07:50.754485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.759517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.759535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:27.348 [2024-11-18 15:07:50.759547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.998 ms 00:21:27.348 [2024-11-18 15:07:50.759553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.759582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.759591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:21:27.348 [2024-11-18 15:07:50.759598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:21:27.348 [2024-11-18 15:07:50.759605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.769794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.769822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:27.348 [2024-11-18 15:07:50.769831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.155 ms 00:21:27.348 [2024-11-18 15:07:50.769837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.769858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.769870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:27.348 [2024-11-18 15:07:50.769879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:27.348 [2024-11-18 15:07:50.769885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.769955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.769963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:27.348 [2024-11-18 15:07:50.769969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:21:27.348 [2024-11-18 15:07:50.769977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.770006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.770013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:27.348 [2024-11-18 15:07:50.770019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:21:27.348 [2024-11-18 15:07:50.770027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.776458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.776581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:27.348 [2024-11-18 
15:07:50.776594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.414 ms 00:21:27.348 [2024-11-18 15:07:50.776601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.776670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.776678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:21:27.348 [2024-11-18 15:07:50.776687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:27.348 [2024-11-18 15:07:50.776693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.780857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.780885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:21:27.348 [2024-11-18 15:07:50.780898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.149 ms 00:21:27.348 [2024-11-18 15:07:50.780904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.781908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.781936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:21:27.348 [2024-11-18 15:07:50.781946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:21:27.348 [2024-11-18 15:07:50.781952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.799701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.348 [2024-11-18 15:07:50.799894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:21:27.348 [2024-11-18 15:07:50.799909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.722 ms 00:21:27.348 [2024-11-18 15:07:50.799915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.348 [2024-11-18 15:07:50.799977] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:21:27.349 [2024-11-18 15:07:50.800012] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:21:27.349 [2024-11-18 15:07:50.800043] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:21:27.349 [2024-11-18 15:07:50.800074] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:21:27.349 [2024-11-18 15:07:50.800081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.349 [2024-11-18 15:07:50.800087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:21:27.349 [2024-11-18 15:07:50.800097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.138 ms 00:21:27.349 [2024-11-18 15:07:50.800102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.349 [2024-11-18 15:07:50.800144] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:21:27.349 [2024-11-18 15:07:50.800152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.349 [2024-11-18 15:07:50.800158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:21:27.349 [2024-11-18 15:07:50.800164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:21:27.349 [2024-11-18 
15:07:50.800172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.349 [2024-11-18 15:07:50.802583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.349 [2024-11-18 15:07:50.802609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:21:27.349 [2024-11-18 15:07:50.802617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.393 ms 00:21:27.349 [2024-11-18 15:07:50.802627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.349 [2024-11-18 15:07:50.803234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.349 [2024-11-18 15:07:50.803264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:21:27.349 [2024-11-18 15:07:50.803273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:21:27.349 [2024-11-18 15:07:50.803282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.349 [2024-11-18 15:07:50.803304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:27.349 [2024-11-18 15:07:50.803312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:21:27.349 [2024-11-18 15:07:50.803333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:21:27.349 [2024-11-18 15:07:50.803339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:27.349 [2024-11-18 15:07:50.803498] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:21:27.607 [2024-11-18 15:07:51.177907] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:21:27.607 [2024-11-18 15:07:51.178282] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:21:28.177 [2024-11-18 15:07:51.698477] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:21:28.177 [2024-11-18 15:07:51.698595] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:28.177 [2024-11-18 15:07:51.698610] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:21:28.177 [2024-11-18 15:07:51.698628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.698638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:21:28.177 [2024-11-18 15:07:51.698651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 895.271 ms 00:21:28.177 [2024-11-18 15:07:51.698659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.698692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.698714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:21:28.177 [2024-11-18 15:07:51.698723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:28.177 [2024-11-18 15:07:51.698730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.707128] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:21:28.177 [2024-11-18 15:07:51.707239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.707251] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:21:28.177 [2024-11-18 15:07:51.707260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.492 ms 00:21:28.177 [2024-11-18 15:07:51.707275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.707960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.707984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:21:28.177 [2024-11-18 15:07:51.707993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:21:28.177 [2024-11-18 15:07:51.708000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.710222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.710243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:21:28.177 [2024-11-18 15:07:51.710252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.206 ms 00:21:28.177 [2024-11-18 15:07:51.710263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.714095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.714130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:21:28.177 [2024-11-18 15:07:51.714139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.811 ms 00:21:28.177 [2024-11-18 15:07:51.714153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.714239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.714253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:21:28.177 [2024-11-18 15:07:51.714263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:21:28.177 [2024-11-18 15:07:51.714270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.715717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.715738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:21:28.177 [2024-11-18 15:07:51.715747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.427 ms 00:21:28.177 [2024-11-18 15:07:51.715754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.715781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.715790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:21:28.177 [2024-11-18 15:07:51.715798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:28.177 [2024-11-18 15:07:51.715806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.715847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:21:28.177 [2024-11-18 15:07:51.715861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.715874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:21:28.177 [2024-11-18 15:07:51.715885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:21:28.177 [2024-11-18 15:07:51.715892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.715944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:28.177 [2024-11-18 15:07:51.715952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:21:28.177 [2024-11-18 15:07:51.715960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:21:28.177 [2024-11-18 15:07:51.715967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:28.177 [2024-11-18 15:07:51.716948] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 969.984 ms, result 0 00:21:28.177 [2024-11-18 15:07:51.732042] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.177 [2024-11-18 15:07:51.748075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:21:28.177 [2024-11-18 15:07:51.756178] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:28.748 15:07:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.748 15:07:52 -- common/autotest_common.sh@862 -- # return 0 00:21:28.748 15:07:52 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:28.748 15:07:52 -- ftl/common.sh@95 -- # return 0 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:21:28.748 Validate MD5 checksum, iteration 1 00:21:28.748 15:07:52 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:28.748 15:07:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:28.748 15:07:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:28.748 15:07:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:28.748 15:07:52 -- ftl/common.sh@154 -- # return 0 00:21:28.748 15:07:52 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:29.007 [2024-11-18 15:07:52.333329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
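Recovery succeeded but was an order of magnitude slower than the first boot: 'FTL startup' took 969.984 ms against 90.584 ms for the clean start, almost all of it in 'Recover open chunks P2L' (895.271 ms), which replayed the two NV-cache chunks (offsets 8032 and 270176, seq ids 14 and 15) that were open at the moment of the SIGKILL. With the listener back on 127.0.0.1 port 4420, the checksum pass is repeated from skip=0 to prove the recovered data is intact. One hedged way to re-inspect the settled cache state at this point, composed from the RPC and jq filters shown earlier:

    # After recovery, the cache chunks should again report as settled.
    scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '.properties[] | select(.name == "cache_device") | .chunks'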
00:21:29.007 [2024-11-18 15:07:52.333434] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87338 ] 00:21:29.007 [2024-11-18 15:07:52.477973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.007 [2024-11-18 15:07:52.516099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.384  [2024-11-18T15:07:54.541Z] Copying: 714/1024 [MB] (714 MBps) [2024-11-18T15:07:55.107Z] Copying: 1024/1024 [MB] (average 705 MBps) 00:21:31.517 00:21:31.517 15:07:54 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:21:31.517 15:07:54 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@103 -- # sum=a5ff3fa412500b95d6cb5cca7c03cdfd 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@105 -- # [[ a5ff3fa412500b95d6cb5cca7c03cdfd != \a\5\f\f\3\f\a\4\1\2\5\0\0\b\9\5\d\6\c\b\5\c\c\a\7\c\0\3\c\d\f\d ]] 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:21:33.417 Validate MD5 checksum, iteration 2 00:21:33.417 15:07:56 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:33.417 15:07:56 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:33.417 15:07:56 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:33.417 15:07:56 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:33.417 15:07:56 -- ftl/common.sh@154 -- # return 0 00:21:33.417 15:07:56 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:33.417 [2024-11-18 15:07:56.961698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
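Both post-recovery checksums come out identical to the pre-kill values (a5ff3fa412500b95d6cb5cca7c03cdfd for the first gigabyte, bd8755e97ea045b5ad8fdd0a9f06d607 for the second), so the dirty shutdown lost no data; the script then clears its traps, deletes the scratch files, and this time stops the target gracefully with killprocess, which is what produces the orderly 'Set FTL clean state' teardown traced after it. The overall validation loop and teardown, sketched from the helpers named in the trace:

    # Validate the namespace in 1 GiB windows; skip advances by count per pass.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        md5sum "$file"        # compared against the reference for this window
    done
    rm -f "$file" "$file.md5"
    killprocess "$spdk_tgt_pid"   # graceful stop, unlike the earlier kill -9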
00:21:33.417 [2024-11-18 15:07:56.961972] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87393 ] 00:21:33.676 [2024-11-18 15:07:57.101911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.676 [2024-11-18 15:07:57.140526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.051  [2024-11-18T15:07:59.209Z] Copying: 701/1024 [MB] (701 MBps) [2024-11-18T15:08:03.394Z] Copying: 1024/1024 [MB] (average 695 MBps) 00:21:39.804 00:21:39.804 15:08:02 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:21:39.805 15:08:02 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:41.705 15:08:04 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:21:41.705 15:08:04 -- ftl/upgrade_shutdown.sh@103 -- # sum=bd8755e97ea045b5ad8fdd0a9f06d607 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@105 -- # [[ bd8755e97ea045b5ad8fdd0a9f06d607 != \b\d\8\7\5\5\e\9\7\e\a\0\4\5\b\5\a\d\8\f\d\d\0\a\9\f\0\6\d\6\0\7 ]] 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:21:41.706 15:08:04 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:21:41.706 15:08:04 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:21:41.706 15:08:04 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:21:41.706 15:08:04 -- ftl/common.sh@130 -- # [[ -n 87304 ]] 00:21:41.706 15:08:04 -- ftl/common.sh@131 -- # killprocess 87304 00:21:41.706 15:08:04 -- common/autotest_common.sh@936 -- # '[' -z 87304 ']' 00:21:41.706 15:08:04 -- common/autotest_common.sh@940 -- # kill -0 87304 00:21:41.706 15:08:04 -- common/autotest_common.sh@941 -- # uname 00:21:41.706 15:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.706 15:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87304 00:21:41.706 killing process with pid 87304 00:21:41.706 15:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:41.706 15:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:41.706 15:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87304' 00:21:41.706 15:08:04 -- common/autotest_common.sh@955 -- # kill 87304 00:21:41.706 15:08:04 -- common/autotest_common.sh@960 -- # wait 87304 00:21:41.706 [2024-11-18 15:08:05.083828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:21:41.706 [2024-11-18 15:08:05.090662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.090796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:21:41.706 [2024-11-18 15:08:05.090853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:41.706 [2024-11-18 15:08:05.090876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 
[2024-11-18 15:08:05.090909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:21:41.706 [2024-11-18 15:08:05.091454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.091544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:21:41.706 [2024-11-18 15:08:05.091590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:21:41.706 [2024-11-18 15:08:05.091609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.091825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.091847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:21:41.706 [2024-11-18 15:08:05.091863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:21:41.706 [2024-11-18 15:08:05.091924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.093345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.093372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:21:41.706 [2024-11-18 15:08:05.093381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.407 ms 00:21:41.706 [2024-11-18 15:08:05.093391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.094227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.094308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:21:41.706 [2024-11-18 15:08:05.094337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.813 ms 00:21:41.706 [2024-11-18 15:08:05.094344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.095994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.096028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:21:41.706 [2024-11-18 15:08:05.096036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.605 ms 00:21:41.706 [2024-11-18 15:08:05.096043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.097351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.097447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:21:41.706 [2024-11-18 15:08:05.097460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.282 ms 00:21:41.706 [2024-11-18 15:08:05.097467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.097529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.097542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:21:41.706 [2024-11-18 15:08:05.097548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:21:41.706 [2024-11-18 15:08:05.097555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.098629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.098649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:21:41.706 [2024-11-18 15:08:05.098656] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.061 ms 00:21:41.706 [2024-11-18 15:08:05.098661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.099882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.099907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:21:41.706 [2024-11-18 15:08:05.099913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:21:41.706 [2024-11-18 15:08:05.099919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.100741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.100830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:21:41.706 [2024-11-18 15:08:05.100841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.797 ms 00:21:41.706 [2024-11-18 15:08:05.100846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.101853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.706 [2024-11-18 15:08:05.101876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:21:41.706 [2024-11-18 15:08:05.101883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.959 ms 00:21:41.706 [2024-11-18 15:08:05.101888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.706 [2024-11-18 15:08:05.101913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:21:41.706 [2024-11-18 15:08:05.101927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:41.706 [2024-11-18 15:08:05.101935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:21:41.706 [2024-11-18 15:08:05.101950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:21:41.706 [2024-11-18 15:08:05.101956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.101994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102018] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.706 [2024-11-18 15:08:05.102050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:21:41.706 [2024-11-18 15:08:05.102056] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7b24f346-7401-4176-82c6-3f5072062c4c 00:21:41.706 [2024-11-18 15:08:05.102065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:21:41.706 [2024-11-18 15:08:05.102071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:21:41.706 [2024-11-18 15:08:05.102076] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:21:41.707 [2024-11-18 15:08:05.102084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:21:41.707 [2024-11-18 15:08:05.102093] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:21:41.707 [2024-11-18 15:08:05.102100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:21:41.707 [2024-11-18 15:08:05.102106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:21:41.707 [2024-11-18 15:08:05.102112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:21:41.707 [2024-11-18 15:08:05.102117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:21:41.707 [2024-11-18 15:08:05.102123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.707 [2024-11-18 15:08:05.102130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:21:41.707 [2024-11-18 15:08:05.102136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:21:41.707 [2024-11-18 15:08:05.102142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.103767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.707 [2024-11-18 15:08:05.103797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:21:41.707 [2024-11-18 15:08:05.103807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.612 ms 00:21:41.707 [2024-11-18 15:08:05.103813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.103871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:41.707 [2024-11-18 15:08:05.103878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:21:41.707 [2024-11-18 15:08:05.103884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:21:41.707 [2024-11-18 15:08:05.103893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.109865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.109897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:41.707 [2024-11-18 15:08:05.109905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.109911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.109939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.109946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:41.707 [2024-11-18 15:08:05.109952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.109958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.110022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.110031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:41.707 [2024-11-18 15:08:05.110039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.110046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.110067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.110074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:41.707 [2024-11-18 15:08:05.110081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.110088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.121506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.121680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:41.707 [2024-11-18 15:08:05.121699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.121705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.125846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.125876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:41.707 [2024-11-18 15:08:05.125884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.125890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.125946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.125955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:41.707 [2024-11-18 15:08:05.125962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.125972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.126009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:41.707 [2024-11-18 15:08:05.126015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.126021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.126086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:41.707 [2024-11-18 15:08:05.126093] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.126099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.126136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:21:41.707 [2024-11-18 15:08:05.126142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.126148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.126191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:41.707 [2024-11-18 15:08:05.126197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.126203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:21:41.707 [2024-11-18 15:08:05.126258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:41.707 [2024-11-18 15:08:05.126265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:21:41.707 [2024-11-18 15:08:05.126270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:41.707 [2024-11-18 15:08:05.126408] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 35.713 ms, result 0 00:21:41.978 15:08:05 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:21:41.978 15:08:05 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:41.978 15:08:05 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:21:41.978 15:08:05 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:21:41.978 15:08:05 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:21:41.978 15:08:05 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:41.978 Remove shared memory files 00:21:41.978 15:08:05 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:21:41.978 15:08:05 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:41.978 15:08:05 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:41.978 15:08:05 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:41.978 15:08:05 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid87167 00:21:41.978 15:08:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:41.978 15:08:05 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:41.978 ************************************ 00:21:41.978 END TEST ftl_upgrade_shutdown 00:21:41.978 ************************************ 00:21:41.978 00:21:41.978 real 1m5.910s 00:21:41.978 user 1m30.228s 00:21:41.978 sys 0m17.679s 00:21:41.978 15:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:41.978 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:21:41.978 15:08:05 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:21:41.978 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:21:41.978 15:08:05 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:21:41.978 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:21:41.978 15:08:05 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:21:41.978 15:08:05 -- ftl/ftl.sh@14 -- # killprocess 81912 00:21:41.978 15:08:05 -- 
common/autotest_common.sh@936 -- # '[' -z 81912 ']' 00:21:41.978 15:08:05 -- common/autotest_common.sh@940 -- # kill -0 81912 00:21:41.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81912) - No such process 00:21:41.978 Process with pid 81912 is not found 00:21:41.978 15:08:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81912 is not found' 00:21:41.978 15:08:05 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:21:41.978 15:08:05 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87512 00:21:41.978 15:08:05 -- ftl/ftl.sh@20 -- # waitforlisten 87512 00:21:41.978 15:08:05 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:41.978 15:08:05 -- common/autotest_common.sh@829 -- # '[' -z 87512 ']' 00:21:41.978 15:08:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.978 15:08:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.978 15:08:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.978 15:08:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.978 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:21:41.978 [2024-11-18 15:08:05.437233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:41.978 [2024-11-18 15:08:05.437360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87512 ] 00:21:42.274 [2024-11-18 15:08:05.582064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.274 [2024-11-18 15:08:05.622889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.274 [2024-11-18 15:08:05.623096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.845 15:08:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.845 15:08:06 -- common/autotest_common.sh@862 -- # return 0 00:21:42.845 15:08:06 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:21:43.104 nvme0n1 00:21:43.104 15:08:06 -- ftl/ftl.sh@22 -- # clear_lvols 00:21:43.104 15:08:06 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:43.104 15:08:06 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:43.104 15:08:06 -- ftl/common.sh@28 -- # stores=d433713a-75ed-4801-b07d-b7888aa14850 00:21:43.104 15:08:06 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:43.104 15:08:06 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d433713a-75ed-4801-b07d-b7888aa14850 00:21:43.363 15:08:06 -- ftl/ftl.sh@23 -- # killprocess 87512 00:21:43.363 15:08:06 -- common/autotest_common.sh@936 -- # '[' -z 87512 ']' 00:21:43.363 15:08:06 -- common/autotest_common.sh@940 -- # kill -0 87512 00:21:43.363 15:08:06 -- common/autotest_common.sh@941 -- # uname 00:21:43.363 15:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.363 15:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87512 00:21:43.363 killing process with pid 87512 00:21:43.363 15:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:43.364 15:08:06 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:43.364 15:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87512' 00:21:43.364 15:08:06 -- common/autotest_common.sh@955 -- # kill 87512 00:21:43.364 15:08:06 -- common/autotest_common.sh@960 -- # wait 87512 00:21:43.929 15:08:07 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:43.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:43.930 Waiting for block devices as requested 00:21:43.930 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:21:44.188 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:21:44.188 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:44.188 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:49.456 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:21:49.457 Remove shared memory files 00:21:49.457 15:08:12 -- ftl/ftl.sh@28 -- # remove_shm 00:21:49.457 15:08:12 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:49.457 15:08:12 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:49.457 15:08:12 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:49.457 15:08:12 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:49.457 15:08:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:49.457 15:08:12 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:49.457 ************************************ 00:21:49.457 END TEST ftl 00:21:49.457 ************************************ 00:21:49.457 00:21:49.457 real 7m29.717s 00:21:49.457 user 9m26.766s 00:21:49.457 sys 1m2.659s 00:21:49.457 15:08:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:49.457 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 15:08:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:49.457 15:08:12 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:49.457 15:08:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:49.457 15:08:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:49.457 15:08:12 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:21:49.457 15:08:12 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:21:49.457 15:08:12 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:49.457 15:08:12 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:49.457 15:08:12 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:21:49.457 15:08:12 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:21:49.457 15:08:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.457 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 15:08:12 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:21:49.457 15:08:12 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:21:49.457 15:08:12 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:21:49.457 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:50.391 INFO: APP EXITING 00:21:50.391 INFO: killing all VMs 00:21:50.391 INFO: killing vhost app 00:21:50.391 INFO: EXIT DONE 00:21:50.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.958 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:21:50.958 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:21:50.958 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:50.958 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:51.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
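The kill sequence traced just above (kill -0, ps --no-headers -o comm=, kill, wait from common/autotest_common.sh) is the harness's standard pattern for stopping a leftover SPDK target. A minimal sketch of that pattern, reconstructed from the xtrace — not the actual killprocess helper, which carries more retries and error handling:

    # sketch only, assuming the target was started as a child of this shell
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # pid already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # mirror the sudo check in the trace; don't signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reaps the child so shared memory is released; fails for non-children
    }
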
00:21:51.782 Cleaning 00:21:51.782 Removing: /var/run/dpdk/spdk0/config 00:21:51.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:51.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:51.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:51.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:51.782 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:51.782 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:51.782 Removing: /var/run/dpdk/spdk0 00:21:51.782 Removing: /var/run/dpdk/spdk_pid68377 00:21:51.782 Removing: /var/run/dpdk/spdk_pid68541 00:21:51.782 Removing: /var/run/dpdk/spdk_pid68834 00:21:51.782 Removing: /var/run/dpdk/spdk_pid68906 00:21:51.782 Removing: /var/run/dpdk/spdk_pid68985 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69084 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69158 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69197 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69234 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69298 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69371 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69795 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69837 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69878 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69894 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69952 00:21:51.782 Removing: /var/run/dpdk/spdk_pid69968 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70026 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70042 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70090 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70102 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70145 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70162 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70294 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70330 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70407 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70461 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70486 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70548 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70568 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70598 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70619 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70654 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70675 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70710 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70725 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70761 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70781 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70817 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70832 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70867 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70888 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70918 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70938 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70974 00:21:51.782 Removing: /var/run/dpdk/spdk_pid70994 00:21:51.782 Removing: /var/run/dpdk/spdk_pid71030 00:21:51.782 Removing: /var/run/dpdk/spdk_pid71045 00:21:51.782 Removing: /var/run/dpdk/spdk_pid71080 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71103 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71133 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71153 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71189 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71204 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71245 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71260 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71290 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71313 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71349 00:21:51.783 Removing: 
/var/run/dpdk/spdk_pid71364 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71400 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71423 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71456 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71485 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71518 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71539 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71574 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71595 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71631 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71704 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71800 00:21:51.783 Removing: /var/run/dpdk/spdk_pid71964 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72031 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72058 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72480 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72803 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72901 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72943 00:21:51.783 Removing: /var/run/dpdk/spdk_pid72963 00:21:51.783 Removing: /var/run/dpdk/spdk_pid73046 00:21:51.783 Removing: /var/run/dpdk/spdk_pid73674 00:21:51.783 Removing: /var/run/dpdk/spdk_pid73705 00:21:51.783 Removing: /var/run/dpdk/spdk_pid74154 00:21:51.783 Removing: /var/run/dpdk/spdk_pid74270 00:21:52.041 Removing: /var/run/dpdk/spdk_pid74380 00:21:52.041 Removing: /var/run/dpdk/spdk_pid74417 00:21:52.041 Removing: /var/run/dpdk/spdk_pid74444 00:21:52.041 Removing: /var/run/dpdk/spdk_pid74464 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76375 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76495 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76499 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76511 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76581 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76585 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76597 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76647 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76651 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76663 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76707 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76711 00:21:52.041 Removing: /var/run/dpdk/spdk_pid76723 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78161 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78246 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78362 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78429 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78489 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78543 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78620 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78689 00:21:52.041 Removing: /var/run/dpdk/spdk_pid78824 00:21:52.041 Removing: /var/run/dpdk/spdk_pid79192 00:21:52.041 Removing: /var/run/dpdk/spdk_pid79219 00:21:52.041 Removing: /var/run/dpdk/spdk_pid79642 00:21:52.041 Removing: /var/run/dpdk/spdk_pid79819 00:21:52.041 Removing: /var/run/dpdk/spdk_pid79907 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80006 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80045 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80066 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80519 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80557 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80607 00:21:52.041 Removing: /var/run/dpdk/spdk_pid80972 00:21:52.041 Removing: /var/run/dpdk/spdk_pid81110 00:21:52.041 Removing: /var/run/dpdk/spdk_pid81912 00:21:52.041 Removing: /var/run/dpdk/spdk_pid82027 00:21:52.041 Removing: /var/run/dpdk/spdk_pid82218 00:21:52.041 Removing: /var/run/dpdk/spdk_pid82293 00:21:52.042 Removing: /var/run/dpdk/spdk_pid82594 00:21:52.042 Removing: /var/run/dpdk/spdk_pid82819 
00:21:52.042 Removing: /var/run/dpdk/spdk_pid83188 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83392 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83467 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83503 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83586 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83600 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83636 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83794 00:21:52.042 Removing: /var/run/dpdk/spdk_pid83996 00:21:52.042 Removing: /var/run/dpdk/spdk_pid84247 00:21:52.042 Removing: /var/run/dpdk/spdk_pid84505 00:21:52.042 Removing: /var/run/dpdk/spdk_pid84801 00:21:52.042 Removing: /var/run/dpdk/spdk_pid85132 00:21:52.042 Removing: /var/run/dpdk/spdk_pid85268 00:21:52.042 Removing: /var/run/dpdk/spdk_pid85344 00:21:52.042 Removing: /var/run/dpdk/spdk_pid85713 00:21:52.042 Removing: /var/run/dpdk/spdk_pid85761 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86058 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86332 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86686 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86801 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86827 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86880 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86930 00:21:52.042 Removing: /var/run/dpdk/spdk_pid86983 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87167 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87193 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87254 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87304 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87338 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87393 00:21:52.042 Removing: /var/run/dpdk/spdk_pid87512 00:21:52.042 Clean 00:21:52.042 killing process with pid 60598 00:21:52.300 killing process with pid 60603 00:21:52.300 15:08:15 -- common/autotest_common.sh@1446 -- # return 0 00:21:52.300 15:08:15 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:21:52.300 15:08:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.300 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.300 15:08:15 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:21:52.300 15:08:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.300 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.300 15:08:15 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:52.300 15:08:15 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:52.300 15:08:15 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:52.300 15:08:15 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:21:52.300 15:08:15 -- spdk/autotest.sh@383 -- # hostname 00:21:52.300 15:08:15 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:52.300 geninfo: WARNING: invalid characters removed from testname! 
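The lcov invocations around this point implement the coverage pipeline: capture test-time counters from the repo, merge them with the pre-test baseline, then strip vendored and system sources from the total. Condensed into one place as a sketch — the flags are taken verbatim from the trace, while the variable names and the commented genhtml step are assumptions, not part of this run:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    OUT=/home/vagrant/spdk_repo/spdk/../output
    # capture counters produced by the test run
    lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
         -t "$(hostname)" -o "$OUT/cov_test.info"
    # merge with the baseline captured before the tests
    lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # drop vendored DPDK sources, then system headers, from the report
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    # genhtml "$OUT/cov_total.info" -o "$OUT/coverage"   # hypothetical HTML report step
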
00:22:18.911 15:08:38 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:18.911 15:08:41 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.850 15:08:43 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:21.750 15:08:44 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.126 15:08:46 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:25.028 15:08:48 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:26.402 15:08:49 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:26.403 15:08:49 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:22:26.403 15:08:49 -- common/autotest_common.sh@1690 -- $ lcov --version 00:22:26.403 15:08:49 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:22:26.403 15:08:49 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:22:26.403 15:08:49 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:22:26.403 15:08:49 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:22:26.403 15:08:49 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:22:26.403 15:08:49 -- scripts/common.sh@335 -- $ IFS=.-: 00:22:26.403 15:08:49 -- scripts/common.sh@335 -- $ read -ra ver1 00:22:26.403 15:08:49 -- scripts/common.sh@336 -- $ IFS=.-: 00:22:26.403 15:08:49 -- scripts/common.sh@336 -- $ read -ra ver2 00:22:26.403 15:08:49 -- scripts/common.sh@337 -- $ local 'op=<' 00:22:26.403 15:08:49 -- scripts/common.sh@339 -- $ ver1_l=2 00:22:26.403 15:08:49 -- scripts/common.sh@340 -- $ ver2_l=1 00:22:26.403 15:08:49 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:22:26.403 15:08:49 -- scripts/common.sh@343 -- $ case "$op" in 00:22:26.403 15:08:49 -- scripts/common.sh@344 -- $ : 1 00:22:26.403 15:08:49 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:22:26.403 15:08:49 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.403 15:08:49 -- scripts/common.sh@364 -- $ decimal 1 00:22:26.403 15:08:49 -- scripts/common.sh@352 -- $ local d=1 00:22:26.403 15:08:49 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:26.403 15:08:49 -- scripts/common.sh@354 -- $ echo 1 00:22:26.403 15:08:49 -- scripts/common.sh@364 -- $ ver1[v]=1 00:22:26.403 15:08:49 -- scripts/common.sh@365 -- $ decimal 2 00:22:26.403 15:08:49 -- scripts/common.sh@352 -- $ local d=2 00:22:26.403 15:08:49 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:26.403 15:08:49 -- scripts/common.sh@354 -- $ echo 2 00:22:26.403 15:08:49 -- scripts/common.sh@365 -- $ ver2[v]=2 00:22:26.403 15:08:49 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:22:26.403 15:08:49 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:22:26.403 15:08:49 -- scripts/common.sh@367 -- $ return 0 00:22:26.403 15:08:49 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.403 15:08:49 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:22:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.403 --rc genhtml_branch_coverage=1 00:22:26.403 --rc genhtml_function_coverage=1 00:22:26.403 --rc genhtml_legend=1 00:22:26.403 --rc geninfo_all_blocks=1 00:22:26.403 --rc geninfo_unexecuted_blocks=1 00:22:26.403 00:22:26.403 ' 00:22:26.403 15:08:49 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:22:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.403 --rc genhtml_branch_coverage=1 00:22:26.403 --rc genhtml_function_coverage=1 00:22:26.403 --rc genhtml_legend=1 00:22:26.403 --rc geninfo_all_blocks=1 00:22:26.403 --rc geninfo_unexecuted_blocks=1 00:22:26.403 00:22:26.403 ' 00:22:26.403 15:08:49 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:22:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.403 --rc genhtml_branch_coverage=1 00:22:26.403 --rc genhtml_function_coverage=1 00:22:26.403 --rc genhtml_legend=1 00:22:26.403 --rc geninfo_all_blocks=1 00:22:26.403 --rc geninfo_unexecuted_blocks=1 00:22:26.403 00:22:26.403 ' 00:22:26.403 15:08:49 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:22:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.403 --rc genhtml_branch_coverage=1 00:22:26.403 --rc genhtml_function_coverage=1 00:22:26.403 --rc genhtml_legend=1 00:22:26.403 --rc geninfo_all_blocks=1 00:22:26.403 --rc geninfo_unexecuted_blocks=1 00:22:26.403 00:22:26.403 ' 00:22:26.403 15:08:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.403 15:08:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:26.403 15:08:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.403 15:08:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.403 15:08:49 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.403 15:08:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.403 15:08:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.403 15:08:49 -- paths/export.sh@5 -- $ export PATH 00:22:26.403 15:08:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.403 15:08:49 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:26.403 15:08:49 -- common/autobuild_common.sh@440 -- $ date +%s 00:22:26.403 15:08:49 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731942529.XXXXXX 00:22:26.403 15:08:49 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731942529.XL6YPE 00:22:26.403 15:08:49 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:22:26.403 15:08:49 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:22:26.403 15:08:49 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:26.403 15:08:49 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:26.403 15:08:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:26.403 15:08:49 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:26.403 15:08:49 -- common/autobuild_common.sh@456 -- $ get_config_params 00:22:26.403 15:08:49 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:22:26.403 15:08:49 -- common/autotest_common.sh@10 -- $ set +x 00:22:26.664 15:08:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:22:26.664 15:08:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:26.664 15:08:50 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:26.664 15:08:50 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:26.664 15:08:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:26.664 15:08:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:26.664 15:08:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:26.664 15:08:50 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:26.664 15:08:50 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:26.664 15:08:50 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:26.664 15:08:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:26.664 + [[ -n 5732 ]] 00:22:26.664 + sudo kill 5732 00:22:26.674 [Pipeline] } 00:22:26.689 [Pipeline] // timeout 00:22:26.694 [Pipeline] } 00:22:26.709 [Pipeline] // stage 00:22:26.714 [Pipeline] } 00:22:26.729 [Pipeline] // catchError 00:22:26.738 [Pipeline] stage 00:22:26.740 [Pipeline] { (Stop VM) 00:22:26.754 [Pipeline] sh 00:22:27.038 + vagrant halt 00:22:29.573 ==> default: Halting domain... 00:22:34.863 [Pipeline] sh 00:22:35.141 + vagrant destroy -f 00:22:37.672 ==> default: Removing domain... 00:22:37.942 [Pipeline] sh 00:22:38.220 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:22:38.229 [Pipeline] } 00:22:38.244 [Pipeline] // stage 00:22:38.249 [Pipeline] } 00:22:38.265 [Pipeline] // dir 00:22:38.271 [Pipeline] } 00:22:38.285 [Pipeline] // wrap 00:22:38.292 [Pipeline] } 00:22:38.304 [Pipeline] // catchError 00:22:38.313 [Pipeline] stage 00:22:38.315 [Pipeline] { (Epilogue) 00:22:38.330 [Pipeline] sh 00:22:38.610 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:42.805 [Pipeline] catchError 00:22:42.808 [Pipeline] { 00:22:42.821 [Pipeline] sh 00:22:43.102 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:43.102 Artifacts sizes are good 00:22:43.110 [Pipeline] } 00:22:43.124 [Pipeline] // catchError 00:22:43.136 [Pipeline] archiveArtifacts 00:22:43.144 Archiving artifacts 00:22:43.276 [Pipeline] cleanWs 00:22:43.302 [WS-CLEANUP] Deleting project workspace... 00:22:43.302 [WS-CLEANUP] Deferred wipeout is used... 00:22:43.318 [WS-CLEANUP] done 00:22:43.320 [Pipeline] } 00:22:43.335 [Pipeline] // stage 00:22:43.341 [Pipeline] } 00:22:43.355 [Pipeline] // node 00:22:43.360 [Pipeline] End of Pipeline 00:22:43.404 Finished: SUCCESS
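
The closing pipeline stages reduce to three shell steps: halt the test VM, destroy it, and hand the results directory to the workspace for archiving. As a sketch — the commands are verbatim from the stages above, and the workspace path is this job's:

    vagrant halt                                              # "Stop VM" stage: graceful shutdown of the domain
    vagrant destroy -f                                        # remove the VM and its disk
    mv output /var/jenkins/workspace/nvme-vg-autotest/output  # expose results to archiveArtifacts in the Epilogue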